Re: Curl commandline, "retry" issue

From: Daniel Stenberg <daniel_at_haxx.se>
Date: Tue, 8 Jan 2008 16:27:55 +0100 (CET)

On Tue, 8 Jan 2008, Philippe HAMEAU wrote:

I'm cc'ing this reply over to the curl-users list since this mail is about the
curl command line tool and about code that's command line only. This is not a
libcurl issue, afaics.

> First issue: it looks like the amount of data received on each try is
> "thrown away" (= discarded?). Is that really what happens, or is it just
> bad display?

No, that is indeed what actually happens. The retry code is clearly not
written to deal properly with timeouts. It assumes that a transfer that fails
possibly got bad data, so it truncates the file to the size it had prior to
the last attempt. That is just wrong, at least for the timeout case (which
your test case shows); there I think it should resume from where it currently
is.

I can't say it's top prio for me though.
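
Just to illustrate the idea, here's a minimal sketch with made-up names (this
is not curl's actual retry code); the only thing that would change is where
the next attempt starts writing:

  #include <stdio.h>
  #include <unistd.h>
  #include <sys/types.h>

  enum retry_reason { RETRY_TIMEOUT, RETRY_OTHER }; /* hypothetical names */

  /* Pick where the next attempt should write from. */
  static void prepare_retry(FILE *out, enum retry_reason why,
                            off_t size_before_attempt)
  {
    off_t from = (why == RETRY_TIMEOUT)
      ? ftello(out)           /* timeout: keep the bytes, resume from here */
      : size_before_attempt;  /* other failure: roll back (current behavior) */

    fflush(out);
    ftruncate(fileno(out), from);
    fseeko(out, from, SEEK_SET);
    /* ...the next attempt then asks the server for "Range: bytes=<from>-" */
  }

  int main(void)
  {
    FILE *out = fopen("demo.bin", "wb+");
    if(!out)
      return 1;
    fputs("0123456789", out);             /* pretend 10 bytes arrived... */
    prepare_retry(out, RETRY_TIMEOUT, 0); /* ...then a timeout: all 10 kept */
    fclose(out);
    return 0;
  }

The offset decision is the whole point: on a timeout we keep what we already
have, on other failures we keep today's roll-back-and-redo behavior.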

> Second issue: in the end, the file is broken. I guess it's a problem with
> continuing.

That's just a blind guess on your part. What makes you say this?

> Maybe some garbage bytes are received when a connection is about to die.

That's not normal. What makes you think this happens?

> Is there a way to tell the "--continue" option to roll back some, let's
> say, 4K?

Nope.
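
You can of course do that rollback outside of curl and then let "-C -" pick
the resume offset from the output file's current size. A rough sketch (the
file name and URL are made up, and the 4K figure is just from your example):

  #include <stdio.h>
  #include <unistd.h>
  #include <sys/stat.h>

  int main(void)
  {
    const char *file = "download.bin";          /* made-up output file */
    const char *url = "http://example.com/big"; /* made-up URL */
    struct stat st;

    if(stat(file, &st) == 0) {
      off_t back = (st.st_size > 4096) ? st.st_size - 4096 : 0;
      truncate(file, back);  /* cut off the possibly-bad 4K tail ourselves */
    }

    /* "-C -" makes curl take the resume offset from the output file's size */
    execlp("curl", "curl", "-C", "-", "-o", file, url, (char *)NULL);
    perror("execlp");        /* only reached if curl couldn't be started */
    return 1;
  }

The same thing is a two-liner in a shell script with truncate(1) and
curl -C -.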

-- 
  Commercial curl and libcurl Technical Support: http://haxx.se/curl.html