libcurl's throttling option can trigger its own transfer stall detection
Date: Tue, 25 Feb 2020 20:02:23 +0000
Hi all,
I'm not completely sure that I'm interpreting this correctly, but it 
seems that setting the "throttling" option for uploads falls foul of the 
"low speed limit".
Take the following:
```
#include <iostream>
#include <algorithm>
#include <cstddef>
#include <ctime>
#include <stdexcept>  // std::runtime_error
#include <curl/curl.h>
// Not sure who's defining this on Windows but it isn't me
#undef min
std::size_t dataSource(char* targetBuffer, const std::size_t size,
                       const std::size_t nitems, void*)
{
     const std::size_t bytesRequested = size * nitems;
     // Choose how much arbitrary data to send
     const std::size_t bytesSent = bytesRequested;
     // Use this instead and it'll work fine!
     //const std::size_t bytesSent = std::min<std::size_t>(bytesRequested, 100);
     std::cerr << time(NULL) << ": Sending " << bytesSent << " bytes ("
               << bytesRequested << " requested)\n";
     for (std::size_t i = 0; i < bytesSent; i++)
         targetBuffer[i] = (char)0xDE;
     return bytesSent;
}
int main()
{
     curl_global_init(CURL_GLOBAL_ALL);
     CURL* easyHandle = curl_easy_init();
     if (!easyHandle)
     {
         std::cerr << time(NULL) << ": Failed to initialise easy handle\n";
         return 1;
     }
     char errorBuf[CURL_ERROR_SIZE] = {};
     try
     {
         const curl_off_t max_upload_bytes_per_sec = 100;
         const long       low_speed_time_secs = 20;  // CURLOPT_LOW_SPEED_TIME expects a long
         const long       low_speed_bytes_per_sec = 1;
         if (curl_easy_setopt(easyHandle, CURLOPT_ERRORBUFFER, errorBuf) != CURLE_OK)
             throw std::runtime_error("curl_easy_setopt(CURLOPT_ERRORBUFFER) failed");
         if (curl_easy_setopt(easyHandle, CURLOPT_MAX_SEND_SPEED_LARGE, max_upload_bytes_per_sec) != CURLE_OK)
             throw std::runtime_error("curl_easy_setopt(CURLOPT_MAX_SEND_SPEED_LARGE) failed");
         if (curl_easy_setopt(easyHandle, CURLOPT_LOW_SPEED_TIME, low_speed_time_secs) != CURLE_OK)
             throw std::runtime_error("curl_easy_setopt(CURLOPT_LOW_SPEED_TIME) failed");
         if (curl_easy_setopt(easyHandle, CURLOPT_LOW_SPEED_LIMIT, low_speed_bytes_per_sec) != CURLE_OK)
             throw std::runtime_error("curl_easy_setopt(CURLOPT_LOW_SPEED_LIMIT) failed");
         // This is just a page that accepts as much data as it's given,
         // then at EOF reports the number of bytes read.
         if (curl_easy_setopt(easyHandle, CURLOPT_URL, "https://www.example.com/blackhole.php") != CURLE_OK)
             throw std::runtime_error("curl_easy_setopt(CURLOPT_URL) failed");
         if (curl_easy_setopt(easyHandle, CURLOPT_POSTFIELDS, NULL) != CURLE_OK)
             throw std::runtime_error("curl_easy_setopt(CURLOPT_POSTFIELDS) failed");
         if (curl_easy_setopt(easyHandle, CURLOPT_POST, 1L) != CURLE_OK)
             throw std::runtime_error("curl_easy_setopt(CURLOPT_POST) failed");
         if (curl_easy_setopt(easyHandle, CURLOPT_READFUNCTION, &dataSource) != CURLE_OK)
             throw std::runtime_error("curl_easy_setopt(CURLOPT_READFUNCTION) failed");
         std::cerr << time(NULL) << ": Performing request...\n";
         const CURLcode res = curl_easy_perform(easyHandle);
         if (res == CURLE_OK)
         {
             std::cerr << time(NULL) << ": OK!\n";
         }
         else
         {
             std::cerr << time(NULL) << ": Failed: " << errorBuf << '\n';
         }
     }
     catch (std::exception & e)
     {
         std::cerr << time(NULL) << ": Exception: " << e.what() << '\n';
     }
     curl_easy_cleanup(easyHandle);
     curl_global_cleanup();
     return 0;
}
```
First, the read callback is invoked with a request for 65,524 bytes.
If I give it 100 bytes, everything works fine: the callback is invoked 
again about a second later and the program keeps streaming data 
indefinitely.
But, if I give it 65,524 bytes, the callback is not invoked again -- I 
guess because there is no need for new data yet; most of the first batch 
is still sitting inside libcurl waiting to be sent. Nothing else actually 
happens for 20 seconds because of the 100 bytes/sec throttling option. 
Then, the request _fails_ with:
> Operation too slow. Less than 1 bytes/sec transferred the last 20 
> seconds
Presumably this is because libcurl sent 2,000 bytes up front, then had 
nothing to do for the rest of the 20 seconds.
Is there something unexpected here? Or is this how it should work? I 
guess I expected the speed limit detector to be inhibited while the 
upload is being throttled.
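If it does work this way by design, one throttle-aware alternative I'm 
considering (just a sketch -- `StallDetector` and its members are my own 
names, nothing from libcurl) is to drop the `CURLOPT_LOW_SPEED_*` options 
and do the stall check myself from a progress callback 
(`CURLOPT_XFERINFOFUNCTION`, with `CURLOPT_NOPROGRESS` set to 0), aborting 
only when `ulnow` has not advanced at all for the whole window:

```cpp
#include <ctime>

// Sketch of a stall detector to run inside a CURLOPT_XFERINFOFUNCTION
// callback; returning nonzero from that callback makes libcurl abort
// the transfer.
struct StallDetector
{
    long long   lastUploaded = 0;  // ulnow the last time progress was seen
    std::time_t lastProgress = 0;  // when we last saw ulnow advance
    std::time_t windowSecs;        // how long zero progress is tolerated

    explicit StallDetector(std::time_t window) : windowSecs(window) {}

    // Feed it the current time and libcurl's ulnow; true means "abort".
    bool stalled(std::time_t now, long long uploadedNow)
    {
        if (lastProgress == 0 || uploadedNow > lastUploaded)
        {
            lastUploaded = uploadedNow;
            lastProgress = now;
            return false;
        }
        return (now - lastProgress) >= windowSecs;
    }
};
```

Unlike the `CURLOPT_LOW_SPEED_LIMIT` average-rate check, this only trips 
when literally nothing has been sent for the window, so a 
throttled-but-moving upload would survive.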
Per the above, if I cap the number of bytes returned by `dataSource` to 
100 then I can work around the issue. But I'd rather not "poison" that 
callback with knowledge of the throttling limit, if I can help it.
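For now, the least-bad version of that workaround I've found is to hand 
the cap to the callback through `CURLOPT_READDATA` rather than hard-coding 
it (a sketch -- only the userdata plumbing differs from the code above):

```cpp
#include <algorithm>
#include <cstddef>

// Same read callback as above, but the per-call byte cap arrives via the
// userdata pointer (set with CURLOPT_READDATA) instead of being hard-coded.
std::size_t dataSource(char* targetBuffer, const std::size_t size,
                       const std::size_t nitems, void* userdata)
{
    const std::size_t cap = *static_cast<const std::size_t*>(userdata);
    const std::size_t bytesSent = std::min(size * nitems, cap);
    for (std::size_t i = 0; i < bytesSent; i++)
        targetBuffer[i] = static_cast<char>(0xDE);
    return bytesSent;
}

// In main(), next to the other setopt calls:
//     static std::size_t cap = 100;  // same value as max_upload_bytes_per_sec
//     curl_easy_setopt(easyHandle, CURLOPT_READDATA, &cap);
```

The callback still doesn't know *why* the cap is 100, but at least the 
value lives in one place in `main()`.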
This is curl v7.68.0, reproduced on macOS Mojave & Windows 10.
Cheers
---
Addendum:
Here's the source for the PHP blackhole, if you're interested:
```
<?php
// Just accepts data forever
if (($stream = fopen('php://input', 'r')) !== FALSE)
{
	$n = 0;
	while (!feof($stream))
	{
		$str = fread($stream, 8192);
		$n += strlen($str);
	}
	print "Read $n bytes\n";
	fclose($stream);
}
else
{
	print "Failed to open input stream";
}
?>
```
-------------------------------------------------------------------
Unsubscribe: https://cool.haxx.se/list/listinfo/curl-library
Etiquette:   https://curl.haxx.se/mail/etiquette.html
Received on 2020-02-25