curl-library
Re: how libcurl receives chunked data from http server?
Date: Mon, 22 Sep 2003 23:52:11 -0700 (PDT)
> > why is it not designed to receive the whole response
> > at a time, in a single buffer?
> Because that would
> A) take LOTS of memory, and
> B) prevent libcurl from downloading lots of files
> (when the available memory is smaller than the file size).
Not to mention:
C) There is absolutely no way to know for sure in advance
how big the buffer needs to be. Some servers will send a
Content-Length header, but a chunked response carries no
Content-Length at all, and even when the header is there
you certainly can't just alloc to that size and willy-nilly
dump the whole file there!
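For what it's worth: libcurl has already decoded the chunked
encoding by the time your write callback runs, so you only ever
see plain body bytes. If you truly must end up with one buffer,
the only safe pattern is to grow it as the data arrives. A rough,
untested sketch; the struct and callback names are mine:

#include <stdlib.h>
#include <string.h>
#include <curl/curl.h>

struct mem {
  char *data;   /* grows via realloc() as pieces arrive */
  size_t len;
};

static size_t grow_cb(void *ptr, size_t size, size_t nmemb, void *userp)
{
  struct mem *m = (struct mem *)userp;
  size_t bytes = size * nmemb;
  char *p = realloc(m->data, m->len + bytes + 1);
  if(!p)
    return 0; /* returning less than 'bytes' makes libcurl abort */
  m->data = p;
  memcpy(m->data + m->len, ptr, bytes);
  m->len += bytes;
  m->data[m->len] = '\0';
  return bytes;
}

/* hook it up with:
     curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, grow_cb);
     curl_easy_setopt(curl, CURLOPT_WRITEDATA, &m);
*/

But that just brings back problems A) and B) above.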
My solution would be to write the response to a FILE *
stream and let it land on disk. Then you will know *exactly*
how big your buffer needs to be. You can do this easily with
libcurl, just the way it is.
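Something like this (untested, and the URL and file name are
just placeholders): download with the default write function,
which fwrite()s every piece of the body to the FILE * you give
it, then ask the file how big it got.

#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
  FILE *fp = fopen("response.tmp", "wb");
  CURL *curl;

  curl_global_init(CURL_GLOBAL_ALL);
  curl = curl_easy_init();
  if(curl && fp) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/big.file");
    /* no callback needed: the default write function simply
       fwrite()s each piece of data to CURLOPT_WRITEDATA */
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, fp);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);

    fseek(fp, 0L, SEEK_END);
    printf("buffer needed: %ld bytes\n", ftell(fp)); /* the *exact* size */
  }
  if(fp)
    fclose(fp);
  curl_global_cleanup();
  return 0;
}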
And before you scream "but disk I/O is too slow!" think about this:
A) How long it takes to write/read a file from disk.
B) How long it takes to transfer the same file across the net.
C) How long it takes to change curl to make it do what you *think* you want.
(A) is tiny compared to (B), so the extra disk write costs you
almost nothing, and (C) costs you the most of all.
If I were you, I would forget about all these "mental" optimizations.
Until you have written, debugged, tested, and profiled a working
prototype of your application, I really don't think you have any
place to say that some "theoretical" bottleneck is in libcurl!
My two cents,
- Jeff