
Re: Is --ftp-retry a good idea?

From: Rich Gray <rgray_at_plustechnologies.com>
Date: Wed, 3 Nov 2004 10:29:18 -0500

> Date: Tue, 2 Nov 2004 22:37:33 +0100 (CET)
> From: Daniel Stenberg <daniel-curl_at_haxx.se>
>
> On Tue, 2 Nov 2004, Rich Gray wrote:
>
>> So I guess --max-time should be per attempt. --retry-max-time would
>> then be a way to set an outer time limit.
>
> Yeps, this is what I'll do.
>
>> Another thought on retries is the retry interval. I'd make the case
>> that instead of a simple inter-retry interval, one might set an
>> interval based on the start time of each attempt. --retry-interval
>> could specify that a retry would be attempted no more often than the
>> given number of seconds.
>
> Why is that a better approach? I think of the delay as a period of
> time when we do nothing to allow the remote server to get less loaded
> and thus be able to provide service again. Then I think it makes
> sense to stay away from the server a period, not limiting requests to
> a fixed interval.

I'm assuming that most errors will happen well within the
--retry-interval, so there will still be idle time between attempts. If a
failure takes longer than --retry-interval and/or --max-time, I'm presuming
it's a timeout condition, so nobody has really been doing any work anyway.
The flaw in this would be if curl and the server were locked in a "heated
discussion": an immediate retry would indeed provide no respite between
attempts. But one is kind of screwed anyway if things go into a hard loop,
or if one simply doesn't allow enough time for the work to get done.

>> curl ... --max-time 60 --retry-interval 30 --retry-max-time 600 ...
>
> In my head, I think of a system where --retry-max-time sets the
> maximum time allowed, but the --retry still sets the amount of
> requests to attempt. So that
>
> curl --max-time 60 --retry 40 --retry-max-time 600
>
> ... will stop either after 40 retries or after 10 minutes, whichever
> comes first.

I still think my approach is more "stable" in terms of the retry interval,
whether errors happen immediately or attempts time out on --max-time.
Assuming your default back-off of one second, doubled up to a max of
10 minutes, for attempts that fail either immediately (call it one second)
or at the 60-second --max-time, the example above would make attempts at:

    Failure in 1 sec          Failure in 60 sec
    Attempt                   Attempt
    @time    delay            @time    delay
        0        1                0        1
        2        2               61        2
        5        4              123        4
       10        8              187        8
       19       16              255       16
       36       32              331       32
       69       64              423       64
      134      128              547      128
      263      256              time's up
      520      512
      time's up

(hope I did the math right)
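
For anyone who wants to check the arithmetic, here's a rough Python sketch
of the model behind the table. It assumes the back-off starts at one second
and doubles after every failed attempt, that each attempt takes a fixed 1 or
60 seconds, and that no new attempt starts once --retry-max-time (600
seconds) has passed. This is just my model of the behaviour, not curl's
actual code:

    # Model of the exponential back-off schedule used for the table above.
    # Assumptions: the back-off starts at 1 second and doubles after every
    # failed attempt; each attempt takes a fixed time (1 or 60 seconds);
    # no new attempt starts after --retry-max-time (600 s) has elapsed.
    def backoff_schedule(attempt_secs, retry_max_time=600):
        start, delay, starts = 0, 1, []
        while start <= retry_max_time:
            starts.append(start)
            start += attempt_secs + delay   # run the attempt, then sleep
            delay *= 2                      # double the back-off
        return starts

    for attempt_secs in (1, 60):
        print("attempt takes %2d s: %s"
              % (attempt_secs, backoff_schedule(attempt_secs)))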

What I proposed would smoothly retry every 30-60 seconds across
the --retry-max-time interval. Hmmm, looks like the real problem
here is the exponential back-off. For a "fast" error, the attempts
beat on the server during the first minute, then hardly happen at
all. For "slow" errors, attempts occur at long intervals which just
get longer. Note that in the "slow" case, the first few retries
provide very small sleeps relative to the time spent making the attempt.
If one presumes that curl and the server were beating on each other
the whole time, it's not much different from my --retry-interval
proposal. (I would in fact modify my --retry-interval proposal
to include a mandatory 2-5 seconds of sleep between attempts.)
The back-off predominates here; --retry will not come into
play except for small values. (Ever hear the story of the
grateful king who agreed to reward his subject with a "chessboard
of grain"? One grain on the first square, two on the next,
four on the next, 8, 16, 32, ... :)
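
To make the comparison concrete, here is the same kind of sketch for the
--retry-interval idea (a hypothetical option, not something curl
implements): the next attempt starts no sooner than --retry-interval seconds
after the previous attempt *started*, and no sooner than a small mandatory
sleep (say 2 seconds) after it finished:

    # Sketch of the proposed --retry-interval behaviour (hypothetical).
    # The next attempt may start no sooner than `interval` seconds after
    # the previous attempt started, and no sooner than `min_sleep` seconds
    # after it finished.
    def interval_schedule(attempt_secs, interval=30, min_sleep=2,
                          retry_max_time=600):
        start, starts = 0, []
        while start <= retry_max_time:
            starts.append(start)
            start = max(start + interval,                  # pace from start
                        start + attempt_secs + min_sleep)  # or after attempt
        return starts

    for attempt_secs in (1, 60):
        print("attempt takes %2d s: %s"
              % (attempt_secs, interval_schedule(attempt_secs)))

With --max-time 60 and --retry-interval 30, a 1-second failure gives a
steady attempt every 30 seconds, and a 60-second timeout gives one roughly
every minute, instead of the burst-then-starve pattern in the table above.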

Cheers!
Rich

p.s. Why does the archive render in proportional font?
The above table will look crappy.