

Re: Massive HTTP/2 parallel requests

From: Molina <>
Date: Wed, 2 Mar 2016 16:38:54 +0100


When trying h2load again, the file is downloaded over a connection limited to 64 streams, just like in my test.
I tried again, this time with 1024 connections, and it is a bit faster than HTTP/1.1, but it opened around 350 connections, which is exactly what HTTP/2.0 is supposed to avoid.

So, back to only one connection: it takes ten times longer than plain HTTP/1.1, assuming all streams are admitted by the server but downloaded 64 at a time. If they are not admitted as streams, the main curl loop shows them idle, competing for a spot in the connection and heavily increasing CPU time.


> On 02 Mar 2016, at 16:21, Lucas Pardue <> wrote:
>> The results of curl --version are:
>> curl 7.47.1 (x86_64-apple-darwin15.3.0) libcurl/7.47.1 OpenSSL/1.0.2f
>> zlib/1.2.5 nghttp2/1.7.0
>> Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3
>> pop3s rtsp smb smbs smtp smtps telnet tftp
>> Features: IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets
>> Concerning the server, I’m running Apache with mod_http2 enabled and
>> the following HTTP/2-related configuration:
>> LoadModule http2_module modules/
>> <IfModule http2_module>
>> Protocols h2c http/1.1
>> LogLevel http2:info
>> </IfModule>
>> This very morning I performed another test, adding the following line:
>> H2MaxSessionStreams 1000000
>> That was just to make sure that the number of streams is higher than the
>> number of parallel downloads in my test (1024). Looking at the logs I realised
>> that all my requests fit in the pipe, but files are still downloaded in 64-stream
>> blocks! With normal HTTP/1.1 connections the total test is around 20 times
>> faster than this way.
>> On a second test I set the following parameters:
>> curl_multi_setopt(multi_handle, CURLMOPT_MAX_TOTAL_CONNECTIONS, 10);
>> curl_multi_setopt(multi_handle, CURLMOPT_MAX_HOST_CONNECTIONS, 10);
>> And what happens is quite annoying: it opened connection 0 for file 1, and
>> consecutive connections 1-9 for files 2-10. The rest, 11-1024, were attached
>> to connection 0, so when connections 1-9 finished (after only one request
>> each) they were never reused.
>> Any clues?
>> José
> Opening more than one HTTP/2 connection is a SHOULD NOT according to Section 9.1 of the HTTP/2 specification, so your client behaviour seems a bit out of keeping with what is expected.
> Have you enabled HTTP/2 multiplexing on the single connection as described by CURLMOPT_PIPELINING ( <>)? Doing so should limit the number of connections to 1, ignoring any value of CURLMOPT_MAX_HOST_CONNECTIONS.
> To eliminate a server issue I would suggest testing the server with h2load ( <>); an example of 1 client making 1024 concurrent requests is:
> h2load -c1 -n1024 -m1024 http://{server}/{file}
> Does it also show the 64-stream behaviour?
> Lucas

List admin:

Received on 2016-03-02