Re: Large download speed deviations for CURLOPT_MAX_RECV_SPEED_LARGE in 8.4.0
From: Dmitry Karpov via curl-library <curl-library_at_lists.haxx.se>
Date: Tue, 5 Dec 2023 20:14:48 +0000
> That might just not be a very effective limit after it already has reached the limit. Maybe we can improve this logic?
I am not sure how to improve the logic with minimal, simple changes.
To get as close to the speed limit as possible, we need to call Curl_pgrsLimitWaitTime() as frequently as possible.
This keeps the "wait time" corrections small and precise.
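For illustration, here is a simplified model of that correction (a standalone sketch, not curl's actual Curl_pgrsLimitWaitTime() implementation; the function name, units and numbers are assumptions):

  #include <stdio.h>

  /* Simplified model of the wait-time correction a limiter computes:
   * given how many bytes arrived and how many milliseconds elapsed
   * since the limit period started, return how long to wait so the
   * average rate stays at or below `limit` bytes per second. */
  static long long wait_ms(long long bytes, long long elapsed_ms,
                           long long limit)
  {
    /* how long receiving `bytes` *should* take at the limit */
    long long expected_ms = bytes * 1000 / limit;

    /* one correction covers everything read since the last check, so
     * the more data read between checks, the coarser the correction
     * and the larger the visible speed deviation */
    return (expected_ms > elapsed_ms) ? expected_ms - elapsed_ms : 0;
  }

  int main(void)
  {
    /* e.g. 4 MB read in 100 ms against a 1 MB/s limit: one large,
     * coarse wait of about 3900 ms instead of many small ones */
    printf("%lld ms\n", wait_ms(4 * 1024 * 1024, 100, 1024 * 1024));
    return 0;
  }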
A simple hack like "data->set.max_recv_speed / 4" may work in some cases, but there can always be combinations of socket/read buffer sizes where it does not work well,
producing sudden speed deviations that are larger than expected and difficult to reproduce.
It is probably possible to monitor the download speed and call Curl_pgrsLimitWaitTime() inside readwrite_data(),
but I think the simplest solution is just to break the loop whenever a speed limit is set.
A client that sets a speed limit cares more about the limit's precision than about any extra performance that back-to-back
reads from the socket may bring, and this approach gives consistent behavior across different socket buffer sizes.
Back-to-back reading from the socket is good when there is no speed limit and the client wants its data as fast as possible,
but with a speed limit it introduces too much inaccuracy into the wait-time calculation to be usable for that case.
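As a rough sketch, the proposed loop change could look like this (a self-contained simulation with hypothetical stand-ins for curl's internals, not the exact readwrite_data() code):

  #include <stdio.h>
  #include <string.h>

  /* Hypothetical stand-in for curl's transfer settings */
  struct settings { long long max_recv_speed; };

  /* fake recv(): pretends the socket always has a full buffer ready */
  static long long do_recv(char *buf, size_t blen)
  {
    memset(buf, 'x', blen);
    return (long long)blen;
  }

  /* Sketch of the proposed receive loop: with a limit set, read once
   * and return, so Curl_pgrsLimitWaitTime() can run again right away. */
  static long long read_loop(const struct settings *set, long long max_recv)
  {
    char buf[16384];
    long long total = 0;

    do {
      long long nread = do_recv(buf, sizeof(buf));
      if(nread <= 0)
        break;
      total += nread;

      /* the proposed change: stop after a single read whenever a
       * speed limit is set, trading back-to-back throughput for
       * limit precision independent of buffer sizes */
      if(set->max_recv_speed > 0)
        break;
    } while(total < max_recv);

    return total;
  }

  int main(void)
  {
    struct settings limited = { 1024 * 1024 };  /* 1 MB/s limit set */
    struct settings unlimited = { 0 };          /* no limit set */

    printf("limited:   %lld bytes per loop pass\n",
           read_loop(&limited, 1024 * 1024));
    printf("unlimited: %lld bytes per loop pass\n",
           read_loop(&unlimited, 1024 * 1024));
    return 0;
  }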
Thanks,
Dmitry Karpov
-----Original Message-----
From: Daniel Stenberg <daniel_at_haxx.se>
Sent: Tuesday, December 5, 2023 2:03 AM
To: Dmitry Karpov via curl-library <curl-library_at_lists.haxx.se>
Cc: Dmitry Karpov <dkarpov_at_roku.com>
Subject: [EXTERNAL] Re: Large download speed deviations for CURLOPT_MAX_RECV_SPEED_LARGE in 8.4.0
On Tue, 5 Dec 2023, Dmitry Karpov via curl-library wrote:
> In the do-while loop, we keep reading from socket until we can, so
> with relatively big socket buffers we can read a lot of data before we
> can have a chance to check for throttling again
It does initialize the max_recv like this:
  curl_off_t max_recv = data->set.max_recv_speed?
                        data->set.max_recv_speed : CURL_OFF_T_MAX;
... so it *is* limited. That might just not be a very effective limit after it already has reached the limit. Maybe we can improve this logic?
Even just getting a smaller portion of the max speed could probably be a better take:
  curl_off_t max_recv = data->set.max_recv_speed?
                        data->set.max_recv_speed / 4 : CURL_OFF_T_MAX;