cURL / Mailing Lists / curl-library / Single Mail


Re: Massive HTTP/2 parallel requests

From: Molina <>
Date: Fri, 11 Mar 2016 11:46:56 +0100

After a deep search and a parallel email thread with the mod_http2 developers, I think I have found the solution to this:

The problem was in the window sizes and TCP buffers. Concretely, I had to increase the TCP buffers in the following way (on OS X 10.11.3):

sudo sysctl -w net.inet.tcp.recvspace=$(( 1024*1024*2 ))  # 2 MB
sudo sysctl -w net.inet.tcp.autorcvbufmax=16777216        # 16 MB
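For reference, the byte values behind those two settings work out as follows (a quick sanity check, not part of the tuning itself):

```shell
# Byte values behind the sysctl settings above:
echo $(( 1024 * 1024 * 2 ))   # recvspace: 2097152 bytes (2 MB)
echo $(( 16 * 1024 * 1024 ))  # autorcvbufmax: 16777216 bytes (16 MB)
```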

After this tuning, both HTTP/1.1 and HTTP/2 download speeds increased. I then ran two tests against the h2o server, using HTTP/1.1 and HTTP/2:

Downloading a big file (1 GB): both protocols had the same download speed, which was expected.
Downloading 10000 10 KB files in parallel: HTTP/1.1 took 14 seconds, while HTTP/2 took 7.

Hence, over long distances and with many parallel downloads, HTTP/2 performed better, as expected.
However, to run both tests it was necessary to use the nghttp utility to increase the per-connection window size to 16 MB and the per-stream window size to 1 MB, with 16 parallel streams.
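For the record, nghttp expresses window sizes as exponents of two: `-w N` (`--window-bits`) sets the per-stream window and `-W N` (`--connection-window-bits`) the per-connection window, each to 2^N - 1 bytes. The invocation for the sizes above might look like this (the URL is a placeholder):

```shell
# nghttp window flags take exponents; the resulting window is 2^N - 1 bytes:
echo $(( (1 << 20) - 1 ))   # -w 20 -> per-stream window: 1048575 bytes (~1 MB)
echo $(( (1 << 24) - 1 ))   # -W 24 -> per-connection window: 16777215 bytes (~16 MB)
# An invocation along these lines (-n discards the data, -m repeats the
# request, giving 16 concurrent streams):
#   nghttp -n -m 16 -w 20 -W 24 https://example.com/test.bin
```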

Of course I also tried to do the same with libcurl, but I did not find any way to do it with the current API. My questions are:

Am I perhaps missing a parameter that would actually increase the window size?
Would it be possible to use libnghttp2 manually to increase the window size? If so, would doing that interfere with libcurl?
If there is no parameter to increase the window size, is it expected to be added in future versions?


> On 02 Mar 2016, at 16:59, Rainer Jung <> wrote:
> Am 02.03.2016 um 15:30 schrieb Molina:
>> The results of curl --version are:
>> curl 7.47.1 (x86_64-apple-darwin15.3.0) libcurl/7.47.1 OpenSSL/1.0.2f zlib/1.2.5 nghttp2/1.7.0
>> Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
>> Features: IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets
>> Concerning the server, I’m running Apache with mod_http2 enabled and the following HTTP/2-related configuration:
>> LoadModule http2_module modules/
>> <IfModule http2_module>
>> Protocols h2c http/1.1
>> LogLevel http2:info
>> </IfModule>
>> This very morning I performed another test, adding the following line: H2MaxSessionStreams 1000000
>> This was just to make sure that the number of streams is higher than the number of parallel downloads in my test (1024). Looking at the logs, I realised that all my requests fit in the pipe, but files are still downloaded in 64-stream blocks! If I try with normal HTTP/1.1 connections, the whole test runs around 20 times faster.
> Does setting H2MaxWorkers on the Apache side help? Also note that the Apache module is still experimental and under rapid development. Stefan publishes to trunk but also to <>. The GitHub one is more recent and gets reintegrated into Apache every now and then. So if you suspect a bug in the module, please also try the GitHub variant.
> The github version has the following info in the entries for version 1.1.0:
> * H2SessionExtraFiles are now shared between all sessions in the same process.
> It now works like this:
> H2MaxWorkers * H2SessionExtraFiles is assumed to be the total number of file
> handles that can be safely used for HTTP/2 transfers without other parts
> of the server running out of handles. This number is shared between all
> HTTP/2 connections in the same server process.
> The default is set to 2. With H2MaxWorkers on most platforms/mpms
> defaulting to 25 that gives a maximum of 50 handles that can be involved in
> transfers.
> I think I have to write a blog post one day about how this works and affects
> performance. tl;dr the more handles http2 may use, the faster static files
> can be served.
> See also "H2SessionExtraFiles" on <>
> For questions on mod_http2 you might reach out to the httpd dev list. Stefan is quite responsive to questions about mod_http2.
> One might wonder, though, which use case you want to support. Serving 1000 concurrent requests in a single connection creates a conflict between fairness - answering all requests at the same pace - and efficiency - answering them using the smallest amount of resources.
> Regards,
> Rainer

Received on 2016-03-11