
curl_easy_pause() unpausing delivers data too fast #9410

Closed
ssdbest opened this issue Sep 1, 2022 · 6 comments
ssdbest commented Sep 1, 2022

I did this

I am trying to pause an HTTP/1.1 connection from the write callback using the curl_easy_pause() API. The call returns success and libcurl stops invoking the write callback. Once resumed, I see the write callback delivering more data per second than what is configured with CURLOPT_MAX_RECV_SPEED_LARGE. This issue was reported in #3240 and a fix shipped in 7.69.1, but it does not seem to be working.

I have also tried returning CURL_WRITEFUNC_PAUSE from the write callback, with the same result.

I expected the following

I don't expect any data exchange between server and client until resume is called. Even after resuming, I expect no more than CURLOPT_MAX_RECV_SPEED_LARGE bytes per second to be delivered.

I would also like to know the maximum time an HTTP connection can stay paused.

curl/libcurl version

Currently using 7.69.1; also tested with the latest, 7.84.0.

[curl -V output]

operating system

Ubuntu 20.04


bagder commented Sep 4, 2022

The title says "not pausing" but the description says it is pausing and that the problem is data arriving too fast when unpaused. So isn't the title then completely wrong?


ssdbest commented Sep 5, 2022

@bagder I see that only the invoking of the write callback by libcurl is paused. I believe the underlying socket still receives data from the server, which accumulates in the socket buffer. (The fix from #3240 looks like it just skips the read from the socket rather than pausing data exchange between the client and the HTTP server; I am not sure that is even possible.) So when we unpause the connection we receive more data, probably because of everything accumulated in the socket buffer.


bagder commented Sep 5, 2022

The question is still what you say the problem is: is it what the title says, or what your description says?


bagder commented Sep 5, 2022

Reading the code, we don't take CURLOPT_MAX_RECV_SPEED_LARGE into consideration when a transfer is unpaused and the cached data is delivered to the application. I think I would rather document this as known behavior than try to fix it, because the fix would be rather kludgy: the transfer would be unpaused but still not actually receiving data until the draining of the cache is complete.


ssdbest commented Sep 5, 2022

@bagder The title can go with the description. Please feel free to change it.
So, shall I assume that there won't be any fix for this in the near term?

The use case I have: depending on the available bandwidth, I need to pause/resume the data transfer until bandwidth is available. Please provide any solution/pointers if you know of one.
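One pattern for that use case (a workaround not confirmed by the libcurl maintainers here, just an application-side sketch): re-pause from the write callback whenever delivery exceeds a per-interval byte budget, then resume with curl_easy_pause(handle, CURLPAUSE_CONT) on a timer. The budget bookkeeping is hypothetical and could look like this:

```c
#include <stddef.h>

/* Hypothetical per-interval budget tracker: the application resets `used`
 * on a timer (e.g. once per second) and resumes the transfer then. */
struct budget {
    size_t limit; /* max bytes allowed per interval, chosen from bandwidth */
    size_t used;  /* bytes already delivered in this interval */
};

/* Record `n` delivered bytes; returns 1 if the transfer should now be
 * paused (budget exhausted), 0 if it may continue. */
static int budget_consume(struct budget *b, size_t n)
{
    b->used += n;
    return b->used >= b->limit;
}

/* Called from the timer: start a fresh interval. */
static void budget_reset(struct budget *b)
{
    b->used = 0;
}
```

In the write callback the application would call budget_consume() on each chunk and return CURL_WRITEFUNC_PAUSE when it reports exhaustion; the timer path would call budget_reset() and curl_easy_pause(handle, CURLPAUSE_CONT). Note this still does not prevent the post-unpause burst described above, it only bounds how often it happens.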

@bagder bagder changed the title libcurl: curl_easy_pause() not pausing the data over connection curl_easy_pause() unpausing delivers data too fast Sep 5, 2022
bagder added a commit that referenced this issue Sep 5, 2022
Reported-by: ssdbest on github
Fixes #9410

bagder commented Sep 5, 2022

If you use HTTP/2 or HTTP/3, pausing cannot be instant and libcurl will continue to cache received data for up to a window-full of bytes.

The delivery of data faster than CURLOPT_MAX_RECV_SPEED_LARGE when unpaused is getting documented in #9430.

@bagder bagder removed the needs-info label Sep 5, 2022
@bagder bagder closed this as completed in 5162ba0 Sep 5, 2022