curl-library
Re: [patch] preliminary high_speed_limit patch
Date: Mon, 12 Jun 2006 14:08:03 +0200 (CEST)
On Mon, 12 Jun 2006, peter wrote:
Thanks for your patch!
> I haven't been able to subscribe to the mailing list yet... this is what I
> would send if I could:
Please do join and continue this thread on the list, since we'll get many more
eyes on your work and comments about it there. Also, you won't have to rely
solely on my presence and feedback.
I'm ccing this reply to the list since it contains details I think others can
enjoy or possibly comment on.
> I'm not a libcurl developer, I just want one feature incorporated, and then
> be on my way (libcurl already has everything else I want in a transfer
> support library :-) Here is a first draft of a patch to add bandwidth
> limiting to libcurl. This function is sometimes called 'throttling' or
> 'bandwidth limiting'. The purpose is to limit the maximum transfer rate used
> by a single connection, so that multiple connections can co-exist on a link
> without each trying to saturate it. I know the CLI already has throttling,
> but I want an API interface (I will be using it from Python).
I agree that this feature has been wanted and missed by other users in the
past, so I appreciate your work on it!
> -- added -Z to the CLI to set high_speed_limit (just for testing, someone
> might have a better name later...)
What's wrong with simply using the --limit-rate option that is already there?
There's hardly any point in offering the same feature implemented twice, is
there?
> -- used all the data available from the progress module and stuck an
> if statement before the calls to Curl_readwrite.
> -- in transfer.c ... if too fast, then usleep (single streamed) &
> curl_pgrsUpdate
usleep() is not a portable function and cannot be used unconditionally like
that. Compare with the code used for this purpose in src/main.c.
Also, if you are doing it with a sleep, why not calculate how long you should
sleep and do a single sleep() call instead? It would save an awful lot of CPU
that is otherwise wasted. Again, that is how the command line tool's logic
works.
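To illustrate the kind of calculation I mean, here is a minimal sketch: given
the bytes moved so far, the elapsed time and the allowed rate, work out one
wait period instead of repeatedly sleeping a fixed amount. The function name,
signature and millisecond granularity are my own inventions for the example,
not taken from src/main.c or from the patch:

#include <curl/curl.h>

/* Sketch only: return how many milliseconds to wait before transferring more
   so that the average rate stays at or below 'limit' bytes per second.
   Assumes limit > 0. A single wait of this length replaces a loop of short
   usleep() calls. */
static long ratelimit_wait_ms(curl_off_t transferred, /* bytes so far */
                              long elapsed_ms,        /* wall time so far */
                              curl_off_t limit)       /* bytes per second */
{
  /* how long this amount of data *should* have taken at the allowed rate */
  long wanted_ms = (long)(transferred * 1000 / limit);

  return (wanted_ms > elapsed_ms) ? wanted_ms - elapsed_ms : 0;
}

With that, the transfer loop waits once for the returned number of
milliseconds (using whatever portable waiting mechanism is available) and then
carries on, instead of burning CPU checking the rate over and over.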
> -- in multi.c ... if too fast, then skip your turn. (curl_pgrsUpdate,
> then break)
Whoa, that's not a good solution. If a socket has been found readable by
select() and curl_multi_perform() just returns without reading from it, the
socket will still be readable and any ordinary app will busy-loop like mad.
No, the multi interface needs to make sure that the socket is not re-triggered
in select() until a certain time has elapsed.
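For anyone reading along, here is roughly the loop a typical multi-interface
application runs (heavily simplified, no error handling). If libcurl leaves a
readable socket unread and simply returns, the select() call at the top wakes
up again immediately and the loop spins at full CPU:

#include <curl/curl.h>
#include <sys/select.h>

/* Sketch of a typical application loop driving the multi interface. */
static void drive(CURLM *multi)
{
  int running = 1;

  while(running) {
    fd_set rd, wr, exc;
    int maxfd = -1;
    struct timeval tv = { 1, 0 }; /* fallback timeout */

    FD_ZERO(&rd);
    FD_ZERO(&wr);
    FD_ZERO(&exc);
    curl_multi_fdset(multi, &rd, &wr, &exc, &maxfd);

    select(maxfd + 1, &rd, &wr, &exc, &tv);

    /* if libcurl declines to read a still-readable socket here, the next
       select() above returns right away again: a busy-loop */
    while(curl_multi_perform(multi, &running) == CURLM_CALL_MULTI_PERFORM)
      ;
  }
}

So whatever the rate limiting does internally, the fd_sets (or a timeout)
handed back to the application must keep the throttled socket quiet until the
computed wait has passed.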
Also, I would think that we want two different limits for upload and download.
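Purely as a sketch of what I mean by two limits, an application could then cap
each direction independently. The option names and values below are
assumptions for illustration, not something the submitted patch provides:

#include <curl/curl.h>

int main(void)
{
  CURL *curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");

    /* hypothetical separate caps: 10000 bytes/second up, 25000 down */
    curl_easy_setopt(curl, CURLOPT_MAX_SEND_SPEED_LARGE, (curl_off_t)10000);
    curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE, (curl_off_t)25000);

    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return 0;
}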
-- 
  -=- Daniel Stenberg -=- http://daniel.haxx.se -=-
   ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol

Received on 2006-06-12