Improve speed limits, not based on average speeds anymore #971
Speed limits (from CURLOPT_MAX_RECV_SPEED_LARGE & CURLOPT_MAX_SEND_SPEED_LARGE) were applied simply by comparing the limits with the cumulative average speed of the entire transfer. While this might work at times with good/constant connections, in other cases it can result in the limits simply being "ignored" for more than "short bursts" (as the man page puts it).

Consider a download that goes on much slower than the limit for some time (because bandwidth is used elsewhere, the server is slow, whatever the reason): once things get better, curl would simply ignore the limit until the average speed (since the beginning of the transfer) reached the limit. This could render the limit useless for effectively capping bandwidth use (at least for quite some time).

So instead, we now use a "moving starting point" as reference, and every time at least as much as the limit has been transferred, we reset this starting point to the current position. This gives a good limiting effect that applies to the "current speed" and reacts instantly (e.g. to a sudden speed burst).
wouldn't it then be easier and smarter to use the existing "current speed" counter?
No, it actually gives different (and less useful) results; things aren't as good then, and I believe it has to do with the fact that the current speed is itself an average. So having a "fixed" starting point to calculate the limit gives much better results.
I don't get that. If the transfer pauses, the current speed drops too. But I agree that it would need to be duplicated (one for each direction) to be usable for this.
So let me check if I understand your suggested algorithm (for a single direction).
Is that what you're proposing?
Well, it depends, because the "current speed" is really an average over the last few seconds. Imagine this, with no transfer happening for the last 5s: at time t, the reported speed is still based on the amount of data from t-5 to t, so it only decays back toward zero gradually. It also wouldn't get to the actual current speed right away after a burst. Anyway, because the current speed is in fact the average over the last few seconds, it reacts too slowly to be used for this.
Yes. It's a timestamp & the amount of data received at that point, of course, but yes. FYI, this was inspired by openssh's scp, because after trying to use …
Ok, yes, that all makes sense. Thanks for explaining. This approach has one additional benefit too: it will support changing the limits at run-time much better than the current code does. That is something I keep hearing users ask for. I think it could make sense to document that fact as well in association with landing this patch.
Thanks a lot!