
readwrite_data: loop less #12504

Closed
wants to merge 2 commits into from

Conversation

bagder
Member

@bagder bagder commented Dec 11, 2023

This function is made to loop in order to drain incoming data faster. Completely removing the loop has a measurably negative impact on transfer speeds.

Downsides with the looping include

  • it might call the progress callback much less often, especially if the write callback is slow


  • rate limiting becomes less exact

  • a single transfer might "starve out" other parallel transfers

  • QUIC timers for other connections can't be maintained correctly

The long-term fix should be to remove the loop and instead optimize the path back into this function, to avoid the transfer speed penalty.

This fix lowers the max loop count to reduce the starvation problem, and avoids the loop completely when rate limiting is in progress.

Ref: #12488
Ref: https://curl.se/mail/lib-2023-12/0012.html
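
For illustration, a minimal sketch of the approach this PR describes. The real change is in curl's readwrite_data(); the names, the loop cap value and the toy data source below are hypothetical, not the merged patch.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative sketch only, not curl's merged code: cap the drain loop at a
   lower maximum and skip looping entirely when a receive rate limit is set. */

#define MAX_DRAIN_LOOPS 10  /* lowered cap to reduce starving other transfers */

struct xfer {
  long max_recv_speed;    /* > 0 means the user set a receive rate limit */
  int chunks_available;   /* toy stand-in for "more data is buffered" */
};

/* hypothetical stand-in for one read + deliver-to-callback pass */
static bool read_and_deliver_once(struct xfer *x)
{
  if(x->chunks_available > 0)
    x->chunks_available--;
  return x->chunks_available > 0; /* true while more data remains pending */
}

static void drain_incoming(struct xfer *x)
{
  /* a single pass while rate limiting is in progress keeps the limiter exact */
  int maxloops = (x->max_recv_speed > 0) ? 1 : MAX_DRAIN_LOOPS;
  bool more = true;

  while(more && maxloops-- > 0)
    more = read_and_deliver_once(x);
}

int main(void)
{
  struct xfer a = {0, 25};     /* no rate limit: drains up to 10 chunks */
  struct xfer b = {1000, 25};  /* rate limited: drains exactly 1 chunk */
  drain_incoming(&a);
  drain_incoming(&b);
  printf("chunks left: a=%d b=%d\n", a.chunks_available, b.chunks_available);
  return 0;
}
```

The point of the sketch: a single pass while rate limiting is active keeps the limiter and progress reporting accurate, while the lowered cap bounds how long one transfer can monopolize the event loop.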

@bagder bagder requested review from icing and jay December 11, 2023 18:44

The test was already somewhat flaky and disabled on several platforms, and with this new lesser-loop approach it becomes even more unstable.
@jay
Member

jay commented Dec 12, 2023

Not against this, but I think to fully fix the issue the PR could be improved by adding a time constraint. Otherwise the servicing of other transfers may suffer. I'd like to know what you and @icing think about the 1ms limit I proposed in the issue thread. In theory, if the data is backed up and available then it should be gobbled in less than a millisecond. If a system call or the user's callback holds things up, the time constraint would kick in.
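
For reference, a rough sketch of what such a 1ms bound could look like, using standard clock_gettime() rather than curl's internal timer helpers; the names and the toy read function are hypothetical, and this is not what was merged.

```c
#include <stdbool.h>
#include <stdio.h>
#include <time.h>

/* Rough sketch of the proposed (not merged) 1ms time constraint: keep
   draining buffered data, but bail out once roughly a millisecond has
   elapsed so other transfers still get serviced. */

static long elapsed_us(const struct timespec *start)
{
  struct timespec now;
  clock_gettime(CLOCK_MONOTONIC, &now);
  return (now.tv_sec - start->tv_sec) * 1000000L +
         (now.tv_nsec - start->tv_nsec) / 1000L;
}

/* hypothetical stand-in for one read+deliver pass; returns true while more
   data remains buffered */
static bool read_and_deliver_once(int *pending)
{
  if(*pending > 0)
    (*pending)--;
  return *pending > 0;
}

static void drain_with_time_limit(int *pending)
{
  struct timespec start;
  clock_gettime(CLOCK_MONOTONIC, &start);

  while(read_and_deliver_once(pending)) {
    /* stop after ~1ms even if more data is buffered, so a slow write
       callback or system call cannot hold up other transfers */
    if(elapsed_us(&start) >= 1000)
      break;
  }
}

int main(void)
{
  int pending = 1000000;
  drain_with_time_limit(&pending);
  printf("pending left: %d\n", pending);
  return 0;
}
```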

@bagder
Member Author

bagder commented Dec 12, 2023

think about the 1ms limit I proposed

I wanted this PR to be a simple (and temporary?) fix that would limit the impact of the current design without sacrificing too much.

I figured such a time-limiting approach, or calling the progress meter from within this callback etc., would be more complicated than needed for a stop-gap solution, when the actual and best fix is to avoid the looping altogether.

A time check is not bad per se; I just figure an even simpler take can work for now, and we can then instead work on a "proper" fix going forward.

@jay
Member

jay commented Dec 12, 2023

Ok

@bagder bagder closed this in 1da640a Dec 14, 2023
@bagder bagder deleted the bagder/transfer-less-loops branch December 14, 2023 15:14