curl-library
Re: how libcurl receives chunked data from http server?
Date: Tue, 23 Sep 2003 10:58:38 -0700
> Subject: Re: how libcurl receives chunked data from http server?
>
> > > why is it not designed to receive the whole response at
> > > once, in a single buffer?
>
> > Because that would
> > A) take LOTS of memory, and
> > B) prevent libcurl from being able to download lots of files
> > (when the amount of available memory is smaller than the file size).
>
> Not to mention:
> C) There is absolutely no way to know for sure in advance
> how big the buffer needs to be. Some servers will send a
> content-length header, but you certainly can't alloc to
> that size, and then willy-nilly dump the whole file there!
Nobody can know that in advance for sure; libcurl can't, and my
application can't either : )
The thing is, in my specific case I have to do it: I need to receive huge
binary image data in contiguous memory and manipulate it there (my app
doesn't handle files).
So I need a single buffer with some initial size to hold the whole
response, and to realloc it larger when necessary.
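Something like this on my side (an untested sketch; write_cb and
mem_chunk are my own placeholder names, and the URL is made up):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <curl/curl.h>

  /* a single buffer that grows to hold the whole response */
  struct mem_chunk {
    char *data;
    size_t size;
  };

  /* libcurl hands us each block as it arrives (chunked or not);
     we realloc the one buffer and append, so it stays contiguous */
  static size_t write_cb(void *ptr, size_t size, size_t nmemb, void *userp)
  {
    size_t realsize = size * nmemb;
    struct mem_chunk *mem = (struct mem_chunk *)userp;
    char *bigger = realloc(mem->data, mem->size + realsize + 1);

    if(!bigger)
      return 0; /* returning less than realsize makes libcurl abort */

    mem->data = bigger;
    memcpy(mem->data + mem->size, ptr, realsize);
    mem->size += realsize;
    mem->data[mem->size] = 0;
    return realsize;
  }

  int main(void)
  {
    struct mem_chunk chunk = { NULL, 0 };
    CURL *curl;

    curl_global_init(CURL_GLOBAL_ALL);
    curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/huge-image");
      curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
      curl_easy_setopt(curl, CURLOPT_WRITEDATA, (void *)&chunk);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }

    printf("got %lu bytes in one contiguous buffer\n",
           (unsigned long)chunk.size);
    free(chunk.data);
    curl_global_cleanup();
    return 0;
  }

The extra copy I mentioned is the memcpy() in the callback; that is the
part I would like to avoid.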
>
>
> My solution would be to write the file to a stream and then
> save the stream to disk. Then you will know *exactly* how
> big your buffer needs to be. You can do this easily with
> libcurl, just the way it is.
>
> And before you scream "but disk I/O is too slow!" think about this:
>
> A) How long it takes to write/read a file from disk.
> B) How long it takes to transfer the same file across the net.
> C) How long it takes to change curl to make it do what you *think* you want.
I don't have A), but I do have B), and B) is the bottleneck in most
cases anyway.
As for C), I just want to avoid an unnecessary copy. It seems I either
have to change the code on my side to work well with libcurl, or patch
libcurl to fit my needs.
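For the record, the stream-to-disk approach you describe needs no custom
callback at all, since libcurl's default behavior is to fwrite() the data
to the FILE * given as CURLOPT_WRITEDATA. A rough, untested sketch (the
function name and error handling are just mine):

  #include <stdio.h>
  #include <curl/curl.h>

  /* download url straight into a file on disk */
  int save_to_disk(const char *url, const char *path)
  {
    CURLcode res = CURLE_FAILED_INIT;
    CURL *curl = curl_easy_init();
    FILE *out = fopen(path, "wb");

    if(curl && out) {
      curl_easy_setopt(curl, CURLOPT_URL, url);
      /* no CURLOPT_WRITEFUNCTION set: the default callback
         fwrite()s every received block to this stream */
      curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
      res = curl_easy_perform(curl);
    }
    if(out)
      fclose(out);
    if(curl)
      curl_easy_cleanup(curl);
    return (res == CURLE_OK) ? 0 : -1;
  }

It just isn't an option for me, since my app never touches the disk.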
Thanks, Jeff
- Jerry
Received on 2003-09-23