curl-library
Re: Partially downloading a web page
Date: Thu, 3 Aug 2006 16:41:35 -0500
On 8/3/06, mark harrington <god_of_war_at_hotmail.co.uk> wrote:
> I'm trying to download part of the HTML code of a web site. I have attempted
> to use the libcurl library and have managed to achieve this for some web
> pages using CURLOPT_RANGE. However, I cannot get it to work with the web page
> that I want to; it would seem that the server does not support
> ranges. Does anyone know of any way of getting round this?
>
> The part of the web page I want to download is at the beginning of the file.
> If it's not possible to get round the range issue, is it possible to
> terminate the download after a specified number of bytes have been received?
> I have looked in the libcurl library but cannot determine how to do this.
There's probably a more elegant way to do this, but you could use
something like the callback used in the getinmemory.c example
program. Instead of saving to a file, it saves the response to a buffer in memory.
You then do whatever you like with the buffer. In your case, you
could simply write out however many bytes you need.
Ralph Mitchell
Received on 2006-08-03