curl-library
Re: How does libcurl handle Content-Encoding: gzip + partial responses with respect to automatic decoding of compressed content
Date: Tue, 1 May 2007 16:43:03 +0200 (CEST)
On Tue, 1 May 2007, Stefan Krause wrote:
> Might it theoretically happen that all the compressed data has to be
> received and stored on the heap
Yes, sure: if the entire data is smaller than a single deflate block, then I
guess we can say that all the compressed data has to be received and stored on
the heap before it can be decompressed.
But I guess that's not what you're asking? gzip-compressed content can indeed
be decompressed in a streaming fashion.
> From my point of view it is safer to disable automatic decompression with
> CURLOPT_HTTP_CONTENT_DECODING and save the received (compressed) data to
> disk. After the data has been received completely, I run a separate
> decompression job over the compressed data.
You do what you think is best, of course. The main advantage of letting
libcurl do it is that the application can remain totally ignorant of whether
the content is compressed and of how to decompress it.
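
For illustration, a minimal (untested) sketch of that; the URL and output
file name here are made up. CURLOPT_ENCODING makes libcurl send
Accept-Encoding: gzip and decode the response body before the write callback
ever sees it:

#include <stdio.h>
#include <curl/curl.h>

/* with automatic decoding enabled, this callback receives the
   already-decompressed bytes */
static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
{
  return fwrite(ptr, size, nmemb, (FILE *)userdata);
}

int main(void)
{
  FILE *out = fopen("body.out", "wb");
  CURL *curl = curl_easy_init();
  if(curl && out) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/data");
    /* "gzip" asks for gzip only; an empty string would enable all
       encodings this libcurl build supports */
    curl_easy_setopt(curl, CURLOPT_ENCODING, "gzip");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, out);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  if(out)
    fclose(out);
  return 0;
}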
> Just to clarify the handling of 206 responses for myself:
> The server sends a 206 response with a bunch of compressed data and the
> Content-Encoding header set to GZIP.
To libcurl, a 206 is no different than a 200. The responses follow the same
rules for encoding in both cases.
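
So a ranged transfer only needs CURLOPT_RANGE on top of the setup above
(again an untested sketch). One caveat worth knowing: with Content-Encoding,
the range applies to the gzip-encoded bytes, so a range that doesn't start at
byte 0 of the stream isn't independently decompressible:

  /* on top of the transfer above: fetch only the first 1000 bytes of
     the (encoded) representation */
  curl_easy_setopt(curl, CURLOPT_RANGE, "0-999");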
>> libcurl doesn't handle compressing data on uploads.
>>
> OK. So here I have to compress the data first (e.g. create a new file with
> the compressed data) and then send parts of that new file with HTTP POST
> range requests to the server. After the data is uploaded I delete the
> compressed file. Is that right?
Sure, or you can compress the data on demand in memory if you want to avoid
the temporary file. Slightly trickier though.
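
Roughly like this (untested, assuming zlib; the payload, buffer size and URL
are all made up): compress in memory with a gzip wrapper and POST the result:

#include <string.h>
#include <zlib.h>
#include <curl/curl.h>

/* gzip-compress src into dst; returns the compressed length or 0 on
   error. windowBits 15 + 16 tells zlib to emit a gzip (not raw
   deflate) wrapper. dst must be big enough; deflateBound() gives a
   safe upper bound. */
static size_t gzip_buffer(const unsigned char *src, size_t srclen,
                          unsigned char *dst, size_t dstlen)
{
  z_stream zs;
  memset(&zs, 0, sizeof(zs));
  if(deflateInit2(&zs, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                  15 + 16, 8, Z_DEFAULT_STRATEGY) != Z_OK)
    return 0;
  zs.next_in = (Bytef *)src;
  zs.avail_in = (uInt)srclen;
  zs.next_out = dst;
  zs.avail_out = (uInt)dstlen;
  if(deflate(&zs, Z_FINISH) != Z_STREAM_END) {
    deflateEnd(&zs);
    return 0;
  }
  deflateEnd(&zs);
  return zs.total_out;
}

int main(void)
{
  static const char payload[] = "data to upload";
  unsigned char gz[4096];
  size_t gzlen = gzip_buffer((const unsigned char *)payload,
                             sizeof(payload) - 1, gz, sizeof(gz));
  CURL *curl = curl_easy_init();
  if(curl && gzlen) {
    /* tell the server the body is gzip-encoded; the server has to
       support that, libcurl won't negotiate it for uploads */
    struct curl_slist *hdrs =
      curl_slist_append(NULL, "Content-Encoding: gzip");
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/upload");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, (char *)gz);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, (long)gzlen);
    curl_easy_perform(curl);
    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
  }
  return 0;
}

The same compressed buffer can of course be written to a temporary file
instead and uploaded in ranged pieces from there.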
--
Commercial curl and libcurl Technical Support: http://haxx.se/curl.html

Received on 2007-05-01