
Re: H3/QUIC flow control/buffering problem and suggestion

From: Dmitry Karpov via curl-library <curl-library_at_lists.haxx.se>
Date: Mon, 9 Jan 2023 23:45:07 +0000

> While this might be the case right now for one/some/all of the backends, I don't think this is something that is carved in stone. curl uses the buffer in one way, but there's nothing that prevents the QUIC stacks from using their own buffers much the same way TCP does in the kernel.

Exactly! That's why I proposed a new "CURLOPT_QUIC_BUFFERSIZE" option for that purpose.
This option would control QUIC buffering for the different backends, keeping it separate from the CURLOPT_BUFFERSIZE option, which today only limits the maximum amount of data passed to the "write callbacks".
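
For example, a rough sketch of what I have in mind (CURLOPT_QUIC_BUFFERSIZE is only the proposed name here, it does not exist in libcurl today, so it is kept in a comment):

  #include <curl/curl.h>

  static void setup(CURL *curl)
  {
    /* existing option: caps the size of each chunk handed to the write
       callback (and today, with ngtcp2, it also sizes the QUIC buffer) */
    curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 16384L);

    /* proposed (hypothetical) option: would size the QUIC transport/flow
       control buffer independently of the write callback chunk size, e.g.
       curl_easy_setopt(curl, CURLOPT_QUIC_BUFFERSIZE, 2*1024*1024L); */
  }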

> Sure, but none of that is carved in stone either and may vary depending on what backends you use or even what versions of the involved components. Also, just using different servers and network speeds can make them vary quite a lot as well.

That's right. But CURLOPT_BUFFERSIZE currently just limits how much data Curl passes to the "write callbacks"; it is not coupled with the underlying transport-layer buffering, except for H3/QUIC via ngtcp2.
ngtcp2 seems to rely on the client to provide a buffer for the transport layer, but the other backends may use their own internal buffers, as the kernel does for TCP sockets.
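
To illustrate the current behavior, a minimal sketch (the callback name is just for illustration): regardless of how the transport buffers data underneath, a single write callback call gets at most the CURLOPT_BUFFERSIZE amount of data:

  #include <curl/curl.h>
  #include <stdio.h>

  /* size * nmemb in a single call stays within the CURLOPT_BUFFERSIZE limit */
  static size_t write_cb(char *ptr, size_t size, size_t nmemb, void *userdata)
  {
    (void)ptr;
    (void)userdata;
    fprintf(stderr, "got %zu bytes\n", size * nmemb);
    return size * nmemb;   /* report the whole chunk as consumed */
  }

  int main(void)
  {
    CURL *curl;
    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 16384L);  /* max chunk size */
      curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, write_cb);
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
  }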

> This sounds like you're talking about something more than just a buffer size option though, a separate buffering system somehow.

Right, I am talking about two separate "buffering" systems - the transport layer's internal buffering (socket buffer etc.) and Curl's read buffering.
In essence, Curl's read buffering is an intermediate layer between the transport layer and the end client.

The transport-layer buffering is supposed to buffer incoming protocol data regardless of whether the client reads it (or does not read it fast enough) and to perform transport flow control.
Curl's read buffering, on the other hand, is supposed to read as much data as it can from the transport layer into an internal buffer and then dispatch it to the client in chunks.

For TCP, Curl's internal read buffer can be very small (16KB) and still provide very good performance, with a much bigger buffer in the TCP layer set via a socket option.
For QUIC, I think we should do the same - keep Curl's internal read buffer separate from the QUIC transport buffer.
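
This is roughly how the TCP case can already be set up, for example - enlarge the kernel receive buffer through a socket option while keeping Curl's read buffer at its small default (a POSIX-only sketch, and the 4MB value is just an example):

  #include <curl/curl.h>
  #include <sys/socket.h>

  /* enlarge the kernel's receive buffer for the socket; Curl's own read
     buffer (CURLOPT_BUFFERSIZE) stays small and independent of this */
  static int sockopt_cb(void *clientp, curl_socket_t fd, curlsocktype purpose)
  {
    int rcvbuf = 4 * 1024 * 1024;   /* example value: 4MB */
    (void)clientp;
    (void)purpose;
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
    return CURL_SOCKOPT_OK;
  }

  static void setup_tcp(CURL *curl)
  {
    curl_easy_setopt(curl, CURLOPT_SOCKOPTFUNCTION, sockopt_cb);
    curl_easy_setopt(curl, CURLOPT_BUFFERSIZE, 16384L);  /* small read buffer */
  }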

It just happens that for H3 with ngtcp2, Curl's read buffer and the QUIC transport buffer are the same, but other QUIC backends may provide their own internal buffers,
and thus Curl's read buffer may not be used for transport buffering with those backends - so increasing it will not help to improve H3 performance.

Thanks,
Dmitry Karpov


-----Original Message-----
From: Daniel Stenberg <daniel_at_haxx.se>
Sent: Monday, January 9, 2023 1:57 PM
To: Dmitry Karpov via curl-library <curl-library_at_lists.haxx.se>
Cc: Dmitry Karpov <dkarpov_at_roku.com>
Subject: [EXTERNAL] Re: H3/QUIC flow control/buffering problem and suggestion

On Mon, 9 Jan 2023, Dmitry Karpov via curl-library wrote:

> In H3, Curl's internal buffering also controls QUIC internal
> buffering,

While this might be the case right now for one/some/all of the backends, I don't think this is something that is carved in stone. curl uses the buffer in one way, but there's nothing that prevents the QUIC stacks from using their own buffers much the same way TCP does in the kernel.

> In other words, to get good download performance with H3, the client will
> have to increase the curl handle's read buffer size and be ready to handle
> much larger data blocks in less frequent "write callback" calls
> compared to H1/H2. This may force client apps to handle H3 differently
> from H1/H2 because of the changed "write callback" frequency and increased data size.

Sure, but none of that is carved in stone either and may vary depending on what backends you use or even what versions of the involved components. Also, just using different servers and network speeds can make them vary quite a lot as well.

> The existing separation between the Curl handle's internal buffering and
> TCP transport-layer buffering in H1/H2 made me think about a special "QUIC"
> option controlling QUIC buffering and flow control, while keeping the
> "read buffer size" option separate from that - so that, as in H1/H2, it will
> only control the frequency and size of data blocks used in the "write callback" calls.

> And while I think that your change is very helpful for having big
> download buffers in H3, we probably still need to think about whether
> we want to use the CURLOPT_BUFFERSIZE option to control H3/QUIC transport
> buffering, considering the side effects I mentioned above.

This sounds like you're talking about something more than just a buffer size option though; a separate buffering system somehow.

But for sure, I'm open to discussing, testing and experimenting with different approaches if such new ways can give us better performance or other benefits that users will appreciate!

-- 
  / daniel.haxx.se
  | Commercial curl support up to 24x7 is available!
  | Private help, bug fixes, support, ports, new features
  | https://curl.se/support.html
Received on 2023-01-10