curl-users
speed limit / speed time / limit rate
Date: Thu, 12 Feb 2004 15:31:38 -0800
I've run across a bit of a strange situation. I think I've shot myself in
the foot.
Let's say you invoke curl like this:
$ curl --remote-name --speed-limit 2 --speed-time 30 --limit-rate 1K <some url where a big file lives>
I've looked at the code in my_fwrite() in src/main.c. I'm trying to figure
out whether the 1K limit can cause the speed-limit check to fire and cancel the
download. It doesn't appear there's any cap on the sleep in my_fwrite().
Do these curl options basically translate to:
"If curl receives more than 31K of data in one second, it'll go to sleep for
30 seconds, triggering the speed limit and basically killing itself"
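If I've read it right, the arithmetic behind that is:

    burst received      = 31K in roughly one second
    pacing at 1K/sec    = 31K / 1K per sec = 31 seconds total
    sleep in my_fwrite  ~ 31 - 1 = 30 seconds with nothing read or written
    --speed-time 30     = 30 seconds below the 2 byte/sec --speed-limit
                          => transfer aborted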
If this is right, it seems quite likely to happen on a 100K ethernet link,
yes?
This is where everyone points and laughs and says "Look at the idiot!"
Is there still a way to do what I want, though -- limit the bandwidth, but
have the transfer cancel if the link really is bad?
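For example, something along these lines is what I'd try first -- the values
are only guesses, the idea being to keep --speed-limit well below --limit-rate
and make --speed-time generous compared to any single rate-limit sleep:

$ curl --remote-name --speed-limit 2 --speed-time 300 --limit-rate 10K <some url where a big file lives>

That only papers over it, though, which is why I'm wondering about the
following.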
I haven't looked at where the info resides, but would it be feasible (and a
good idea) for my_fwrite() to check the argument to --speed-time and never
sleep long enough to trigger it?
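In other words, in place of the plain sleep() in my sketch above, something
like this (speed_time being a hypothetical copy of the --speed-time argument,
not a variable I know exists in the real code):

    /* hypothetical: cap each sleep safely below the --speed-time window */
    long speed_time = 30;  /* stand-in for the --speed-time argument */
    long wanted = bytes_since_mark / rate_limit;
    long cap = (speed_time > 1) ? speed_time - 1 : 1;
    sleep((unsigned int)((wanted < cap) ? wanted : cap));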
Thanks for your help.
-DB