Speed throttling precision issues in the latest libcurl
From: Dmitry Karpov via curl-library <curl-library_at_lists.haxx.se>
Date: Tue, 5 Mar 2024 01:04:28 +0000
Hi All,
After running some speed throttling tests on the latest libcurl, I noticed that the speed throttling precision got lost again.
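(For context, the throttling in question is the receive speed limit set with CURLOPT_MAX_RECV_SPEED_LARGE. A minimal sketch of the kind of test I mean, with a placeholder URL and limit, comparing the configured limit against the average rate reported by CURLINFO_SPEED_DOWNLOAD_T:)

#include <stdio.h>
#include <curl/curl.h>

/* discard the downloaded body; we only care about the transfer rate */
static size_t discard_body(char *ptr, size_t size, size_t nmemb, void *userdata)
{
  (void)ptr;
  (void)userdata;
  return size * nmemb;
}

int main(void)
{
  /* placeholder values; a real test would use its own URL and limit */
  const curl_off_t limit = 100 * 1024; /* 100 KB/s */
  CURL *curl;
  CURLcode res;

  curl_global_init(CURL_GLOBAL_DEFAULT);
  curl = curl_easy_init();
  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/bigfile");
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard_body);
    /* ask libcurl to keep the receive rate at or below 'limit' bytes/second */
    curl_easy_setopt(curl, CURLOPT_MAX_RECV_SPEED_LARGE, limit);

    res = curl_easy_perform(curl);
    if(res == CURLE_OK) {
      curl_off_t avg = 0;
      /* average download speed over the whole transfer, in bytes/second */
      curl_easy_getinfo(curl, CURLINFO_SPEED_DOWNLOAD_T, &avg);
      printf("limit: %" CURL_FORMAT_CURL_OFF_T
             " avg: %" CURL_FORMAT_CURL_OFF_T " bytes/s\n", limit, avg);
    }
    curl_easy_cleanup(curl);
  }
  curl_global_cleanup();
  return 0;
}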
I reported this issue for 8.5.0, and it was fixed for 8.6.0 as part of https://github.com/curl/curl/commit/1da640abb6886aab822ff0c0da71b1df0ca89d0f, which avoided read looping when speed throttling was enabled
by setting curl_off_t max_recv to zero (curl_off_t max_recv = data->set.max_recv_speed ? 0 : CURL_OFF_T_MAX;) in readwrite_data() in lib/transfer.c.
But it looks like that part was lost during some recent readwrite_data() refactoring, and the function went back to read looping when a speed limit is set, which, depending on network speed and download size, can create speed deviations of more than 10%.
To keep the speed limit reasonably precise (<1% deviation on average), we need to avoid read looping when a speed limit is set, so the following change should probably be applied:
---
 lib/transfer.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/lib/transfer.c b/lib/transfer.c
index 3ae4b61c0..3ad7904de 100644
--- a/lib/transfer.c
+++ b/lib/transfer.c
@@ -495,7 +495,7 @@ static CURLcode readwrite_data(struct Curl_easy *data,
     /* Observe any imposed speed limit */
     if(bytestoread && data->set.max_recv_speed) {
       curl_off_t net_limit = data->set.max_recv_speed - total_received;
-      if(net_limit <= 0)
+      if(net_limit <= 0 || total_received)
        break;
      if((size_t)net_limit < bytestoread)
        bytestoread = (size_t)net_limit;
--

which will break the reading loop after the first read.

Thanks,
Dmitry Karpov
Received on 2024-03-05