curl-and-python
Re: aborting a transaction
Date: Mon, 28 Sep 2009 13:06:28 -0700
Hi Daniel,
On Sun, Sep 27, 2009 at 11:36:17PM +0200, Daniel Stenberg wrote:
> On Thu, 24 Sep 2009, Daniel Stenberg wrote:
>
>> I suggest we just make libcurl stop at 100K and then consider the rest
>> not a HTTP header anymore - or perhaps consider it an illegal/bad
>> stream and bail out. Any other opinions or perhaps nods?
>
> I've just committed a change that will cause libcurl to bail out entirely
> if a received HTTP header is longer than 100K. It will return
> CURLE_OUT_OF_MEMORY as it is kind of a memory-related problem and I
> couldn't really think of any other existing error code that would suit
> better.
Thanks for applying this fix to the header parsing logic. I had a
related question about CURLOPT_MAXFILESIZE.
The documentation currently says: "The file size is not always
known prior to download, and for such files this option has no effect
even if the file transfer ends up being larger than this given limit."
I understand that we may not always know the download size ahead of
time, but in many cases libcurl consumers are using the default write
function, and haven't installed their own by setting
CURLOPT_WRITEFUNCTION. In the case where the default write function is
used and MAXFILESIZE is set, would it be reasonable to have the
write function check whether the maximum file size has been exceeded
and return CURLE_FILESIZE_EXCEEDED if so? This would mean that most callers
get the check by default, making their implementations more secure. I
would expect anyone writing their own WRITEFUNCTION to need to check
this error condition on their own.
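To make the idea concrete, here is a minimal sketch of such a size check
written as a pycurl-style write callback. The class name and structure are
mine, not anything that exists in libcurl or pycurl. One caveat worth
noting: when a custom write callback aborts a transfer by consuming fewer
bytes than it was handed, libcurl today reports CURLE_WRITE_ERROR, so
surfacing CURLE_FILESIZE_EXCEEDED specifically would need the check to
live inside libcurl's default write function as proposed above.

```python
# Illustrative sketch only: a byte-counting write callback that aborts
# the transfer once a limit is exceeded. With pycurl one would install it
# via c.setopt(pycurl.WRITEFUNCTION, writer); here it is exercised
# directly so no network transfer is needed.

class SizeLimitedWriter:
    """Collects response body data, aborting once a byte limit is passed."""

    def __init__(self, max_size):
        self.max_size = max_size
        self.received = 0
        self.chunks = []

    def __call__(self, data):
        # libcurl invokes the write callback once per body chunk.
        # Returning a value other than len(data) (here 0) tells libcurl
        # to abort the transfer; it then fails with CURLE_WRITE_ERROR.
        if self.received + len(data) > self.max_size:
            return 0
        self.received += len(data)
        self.chunks.append(data)
        return len(data)

# Exercising the callback without a transfer:
writer = SizeLimitedWriter(max_size=10)
assert writer(b"12345") == 5      # under the limit: accepted
assert writer(b"6789") == 4       # still under the limit
assert writer(b"overflow!") == 0  # would exceed 10 bytes: abort signal
assert b"".join(writer.chunks) == b"123456789"
```

The point of the proposal, of course, is that callers using the default
write function would get this bookkeeping for free whenever
CURLOPT_MAXFILESIZE is set, instead of each application reimplementing it.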
Thoughts?
Many thanks,
-j
_______________________________________________
http://cool.haxx.se/cgi-bin/mailman/listinfo/curl-and-python
Received on 2009-09-28