
Re: ratelimits revisited

From: Dmitry Karpov via curl-library <curl-library_at_lists.haxx.se>
Date: Mon, 24 Nov 2025 21:44:26 +0000

Hi Stefan,

I tried your latest changes, and they worked much better for the 16 mbps rate limit but still showed poor results for 32 mbps.
I observed this for both 1 MB and 5 MB files; here are the results:

Speed limit test [LAN (~700 mbps)], iterations=5

Url=http://192.168.128.3/Data/file_1M.bin, max_speed=32000000 bps
time=24 ms, dnld=1048576 B, speed=341250020 bps, spd_diff=309250020 bps, pct=966.4 %
time=80 ms, dnld=1048576 B, speed=104421639 bps, spd_diff=72421639 bps, pct=226.3 %
time=78 ms, dnld=1048576 B, speed=106934808 bps, spd_diff=74934808 bps, pct=234.2 %
time=17 ms, dnld=1048576 B, speed=467853206 bps, spd_diff=435853206 bps, pct=1362.0 %
time=25 ms, dnld=1048576 B, speed=332195786 bps, spd_diff=300195786 bps, pct=938.1 %
--------------------------------------------------------------------
avg_deviation=745.4 %, max_deviation=1362.0 %

Speed limit test [LAN (50 mbps)], iterations=5

Url=http://192.168.128.3/Data/file_1M.bin, max_speed=32000000 bps
time=187 ms, dnld=1048576 B, speed=44637112 bps, spd_diff=12637112 bps, pct=39.5 %
time=174 ms, dnld=1048576 B, speed=47950246 bps, spd_diff=15950246 bps, pct=49.8 %
time=189 ms, dnld=1048576 B, speed=44278042 bps, spd_diff=12278042 bps, pct=38.4 %
time=190 ms, dnld=1048576 B, speed=43953011 bps, spd_diff=11953011 bps, pct=37.4 %
time=187 ms, dnld=1048576 B, speed=44704909 bps, spd_diff=12704909 bps, pct=39.7 %
--------------------------------------------------------------------
avg_deviation=41.0 %, max_deviation=49.8 %

Speed limit test [LAN (~700 mbps)], iterations=5

Url=http://192.168.128.3/Data/file_5M.bin, max_speed=32000000 bps
time=1018 ms, dnld=5242880 B, speed=41197043 bps, spd_diff=9197043 bps, pct=28.7 %
time=1031 ms, dnld=5242880 B, speed=40666557 bps, spd_diff=8666557 bps, pct=27.1 %
time=1020 ms, dnld=5242880 B, speed=41117321 bps, spd_diff=9117321 bps, pct=28.5 %
time=1028 ms, dnld=5242880 B, speed=40792249 bps, spd_diff=8792249 bps, pct=27.5 %
time=1029 ms, dnld=5242880 B, speed=40739988 bps, spd_diff=8739988 bps, pct=27.3 %
--------------------------------------------------------------------
avg_deviation=27.8 %, max_deviation=28.7 %

Url=http://192.168.128.3/Data/file_5M.bin, max_speed=32000000 bps
time=1173 ms, dnld=5242880 B, speed=35727796 bps, spd_diff=3727796 bps, pct=11.6 %
time=1186 ms, dnld=5242880 B, speed=35351385 bps, spd_diff=3351385 bps, pct=10.5 %
time=1171 ms, dnld=5242880 B, speed=35792340 bps, spd_diff=3792340 bps, pct=11.9 %
time=1172 ms, dnld=5242880 B, speed=35777075 bps, spd_diff=3777075 bps, pct=11.8 %
time=1173 ms, dnld=5242880 B, speed=35731327 bps, spd_diff=3731327 bps, pct=11.7 %
--------------------------------------------------------------------
avg_deviation=11.5 %, max_deviation=11.9 %

Thanks for the explanations of how the new rate limiting mechanism works.
Based on them, I guess the problems I observed with the 32 mbps rate limit come down to the "last chunk" issue, which can break the rate limit accuracy significantly.

A few comments:

> I updated the PR to go down to 500ms intervals (if the rate is >= 32kb/s). This makes the last incomplete interval smaller, giving more precision.

Maybe it would make sense to make this interval an option in the future, so clients that need better rate limit accuracy and are willing to sacrifice some CPU utilization could decrease this interval and get better results.

> Another, silly strategy would be to hold the transfer at the end until the full second has passed. That also would give precise speed numbers, but is rather futile.

Frankly, I don't think it would be silly. In fact, I think it would be a valid approach to download the last chunk as close to the rate limit as we can and then wait a bit at the end of the transfer to "catch up" with the limit.

It all makes sense to me because in adaptive video streaming (my primary area of focus), the actual transfer speeds of video segments are used to make important decisions about which video bitrate to select for download and playback.

In video streaming applications, rate limiting is not just about saving bandwidth; it also affects how the streaming session proceeds.
If the "last chunk" problem produces an unexpectedly high transfer speed for one video segment, the streamer may select too high a bitrate for the next segment, and playback may start stuttering.

Typically, network speed deviations up to 5% above the rate limit are OK, but anything more creates problems.
From this perspective, it makes sense to wait at the end of the last download chunk to match the rate limit, so that the measured transfer speed does not come out too high and force the streaming application into a wrong decision.

And there are probably other types of applications that rely on measured transfer speeds to make decisions.

Thanks!
Dmitry


-----Original Message-----
From: Stefan Eissing <stefan_at_eissing.org>
Sent: Saturday, November 22, 2025 3:00 AM
To: Dmitry Karpov <dkarpov_at_roku.com>
Cc: libcurl development <curl-library_at_lists.haxx.se>
Subject: [EXTERNAL] Re: ratelimits revisited



> Am 21.11.2025 um 20:42 schrieb Dmitry Karpov <dkarpov_at_roku.com>:
>
>> How do you measure the transfer time exactly? I'd need to reproduce your measurements to drive this forward.
>
> I use steady clock for that (provided by std::chrono::steady_clock), and I take time points at transfer start and transfer end.
> The (end - start) duration is my measured transfer time as client sees it.
>
> But you can get very similar results if you measure transfer time using curl_easy_getinfo and CURLINFO_TOTAL_TIME_T.
> I tried it, and the results are almost identical to my measurements with steady clock.

Thanks for the explanation. That makes sense then. The PR had taken the approach of "download n bytes per second", meaning at the start of each second interval, curl received up to n bytes. This results in the following:

5MB download with 2 MB/s:

+0----+1----+2----+3----
     2MB   4MB 5MB

with an overall time then of something like 2.1 seconds and an average of 2.4 MB/s. *But while it was ongoing*, the 2 MB/s rate was exactly obeyed.

The last chunk of the download never takes the full second interval, which increases the total average rate even though the transfer stayed at 2 MB/s the whole time. The shorter the download, the more this inflates the total average.

I updated the PR to go down to 500ms intervals (if the rate is >= 32kb/s). This makes the last incomplete interval smaller, giving more precision.

We can make those intervals even smaller, increasing the precision more and more, but then CPU usage rises because more timeouts fire. This is a tradeoff. We do not want to receive, in the extreme, a single byte per microsecond for ultimate precision; there seems to be little value in that. Tradeoffs.

Another, silly strategy would be to hold the transfer at the end until the full second has passed. That also would give precise speed numbers, but is rather futile.

Cheers,
Stefan

>
> Thanks,
> Dmitry
>
>
> -----Original Message-----
> From: Stefan Eissing <stefan_at_eissing.org>
> Sent: Friday, November 21, 2025 12:25 AM
> To: Dmitry Karpov <dkarpov_at_roku.com>
> Cc: libcurl development <curl-library_at_lists.haxx.se>
> Subject: [EXTERNAL] Re: ratelimits revisited
>
>
>
>> Am 20.11.2025 um 21:45 schrieb Dmitry Karpov <dkarpov_at_roku.com>:
>>
>>> 1. How/with what do you takes the measurements?
>>
>> I have a set of download throttle tests as part of our libcurl readiness tests, which we use when we upgrade libcurl.
>> The throttle test sets a rate limit via CURLOPT_MAX_RECV_SPEED_LARGE,
>> performs the transfer, and calculates the actual download speed (dividing the download size by the measured transfer time).
>
> How do you measure the transfer time exactly? I'd need to reproduce your measurements to drive this forward.
>
> Thanks,
> Stefan

-- 
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette:   https://curl.se/mail/etiquette.html
Received on 2025-11-24