
libcurl http2 connection reuse

From: lali .cpp via curl-users <curl-users_at_cool.haxx.se>
Date: Wed, 10 Jan 2018 13:29:53 +0530

Hi,

I am using libcurl's multi interface (library version 7.57) along with
epoll for getting data in parallel from n HTTP/2 servers (assume that n = 2).
I want to give a total of, say, 200 ms to receive all the responses (from the
n HTTP/2 servers), after which I process the responses. Find attached a file
that contains the code that uses the multi_socket_action API with epoll.
Output of curl_version(): libcurl/7.57.0 OpenSSL/1.0.2g zlib/1.2.8
c-ares/1.10.0 nghttp2/1.29.0

Here is how I use it:
1) Create a multi handle and set options on it.
2) Create n easy handles.
3) Set options on the easy handles.
4) Add the n easy handles to the multi handle.
5) Call curl_multi_socket_action with CURL_SOCKET_TIMEOUT to start everything.
I wait a maximum of 200 ms for the entire process using epoll (sketched below).
6) Call curl_multi_remove_handle on each of the n easy handles.
7) Go back to step 3.
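
In case the attachment gets stripped by the list, here is a trimmed-down
sketch of the setup part (steps 1 to 5). The URLs are placeholders and error
checking is omitted, so it is not the exact code I run:

#include <string.h>
#include <curl/curl.h>
#include <sys/epoll.h>

/* libcurl tells us which sockets to watch, and for what */
static int socket_cb(CURL *easy, curl_socket_t s, int what,
                     void *userp, void *socketp)
{
  int epfd = *(int *)userp;
  struct epoll_event ev;
  memset(&ev, 0, sizeof(ev));
  ev.data.fd = (int)s;
  if(what == CURL_POLL_REMOVE) {
    epoll_ctl(epfd, EPOLL_CTL_DEL, (int)s, NULL);
    return 0;
  }
  if(what & CURL_POLL_IN)
    ev.events |= EPOLLIN;
  if(what & CURL_POLL_OUT)
    ev.events |= EPOLLOUT;
  /* modify if already registered, otherwise add */
  if(epoll_ctl(epfd, EPOLL_CTL_MOD, (int)s, &ev) == -1)
    epoll_ctl(epfd, EPOLL_CTL_ADD, (int)s, &ev);
  return 0;
}

/* libcurl's suggested timeout, remembered so the event loop can honour it */
static long curl_suggested_timeout = -1;

static int timer_cb(CURLM *multi, long timeout_ms, void *userp)
{
  curl_suggested_timeout = timeout_ms;
  return 0;
}

int main(void)
{
  CURL *easy[2];
  const char *urls[2] = { "https://server-a.example/",  /* placeholder URLs */
                          "https://server-b.example/" };

  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURLM *multi = curl_multi_init();                        /* step 1 */
  int epfd = epoll_create1(0);

  curl_multi_setopt(multi, CURLMOPT_SOCKETFUNCTION, socket_cb);
  curl_multi_setopt(multi, CURLMOPT_SOCKETDATA, &epfd);
  curl_multi_setopt(multi, CURLMOPT_TIMERFUNCTION, timer_cb);

  for(int i = 0; i < 2; i++) {                             /* steps 2-4 */
    easy[i] = curl_easy_init();
    curl_easy_setopt(easy[i], CURLOPT_URL, urls[i]);
    curl_easy_setopt(easy[i], CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0);
    curl_multi_add_handle(multi, easy[i]);
  }

  int still_running = 0;                                   /* step 5 */
  curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &still_running);

  /* ... the 200 ms event loop and handle removal follow (next sketch) ... */
}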

If all responses are received within 200 ms, I process them; otherwise (at
least one of them could not complete) I process the ones I did receive. That
is, 200 ms is the maximum time I want my application to wait for a response
from a designated server, after which I continue working with the responses
I have received.
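
And this is roughly how the 200 ms budget is enforced, continuing inside
main() from the sketch above (it additionally needs <time.h>; again
simplified, not the exact code I run):

  /* drive libcurl from epoll events, but never wait beyond the deadline */
  struct timespec start, now;
  clock_gettime(CLOCK_MONOTONIC, &start);

  while(still_running) {
    clock_gettime(CLOCK_MONOTONIC, &now);
    long elapsed = (now.tv_sec - start.tv_sec) * 1000 +
                   (now.tv_nsec - start.tv_nsec) / 1000000;
    long remaining = 200 - elapsed;
    if(remaining <= 0)
      break;                          /* deadline hit: stop waiting */

    int wait_ms = (int)remaining;
    if(curl_suggested_timeout >= 0 && curl_suggested_timeout < wait_ms)
      wait_ms = (int)curl_suggested_timeout;

    struct epoll_event events[8];
    int n = epoll_wait(epfd, events, 8, wait_ms);
    if(n <= 0) {
      /* no socket activity: let libcurl run its internal timeouts */
      curl_multi_socket_action(multi, CURL_SOCKET_TIMEOUT, 0, &still_running);
      continue;
    }
    for(int i = 0; i < n; i++) {
      int flags = 0;
      if(events[i].events & EPOLLIN)
        flags |= CURL_CSELECT_IN;
      if(events[i].events & EPOLLOUT)
        flags |= CURL_CSELECT_OUT;
      curl_multi_socket_action(multi, events[i].data.fd, flags, &still_running);
    }
  }

  /* process whatever completed within the budget */
  CURLMsg *msg;
  int msgs_left = 0;
  while((msg = curl_multi_info_read(multi, &msgs_left))) {
    if(msg->msg == CURLMSG_DONE) {
      /* this transfer finished in time; use its response */
    }
  }

  /* step 6: remove all easy handles; a transfer that did not finish is
     marked "premature" here and its connection gets torn down */
  for(int i = 0; i < 2; i++)
    curl_multi_remove_handle(multi, easy[i]);

  /* step 7: set options on the same easy handles again and repeat */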

Now the problem is that whenever a server is unable to send a response
within 200 ms (let's call this event a timeout), libcurl tears down the
underlying TCP connection established with this server, which adds the
overhead of connection establishment (which increases a lot considering that
I am using https) to a subsequent transaction with the same server.

Technically, when the curl_multi_remove_handle function is called for the
easy handle, the transfer is not yet in the state CURLM_STATE_COMPLETED, so
it is marked "premature" (premature = (data->mstate < CURLM_STATE_COMPLETED)
? TRUE : FALSE;) and the premature connection is then closed in the
function multi_done.

Here is the comment before calling Curl_disconnect on the connection in
the function multi_done:

  /*
    if premature is TRUE, it means this connection was said to be DONE before
    the entire request operation is complete and thus we can't know in what
    state it is for re-using, so we're forced to close it. In a perfect world
    we can add code that keep track of if we really must close it here or not,
    but currently we have no such detail knowledge.
  */

I can understand why connection teardown is needed for HTTP/1.1, but why
can't this behaviour be done away with for HTTP/2, considering that HTTP/2
supports multiple multiplexed streams over the same connection? Theoretically
I could just discard the old stream (which has timed out from my
application's perspective and could contain old/invalid data) and start a
new stream for subsequent transactions on the same established connection,
instead of tearing down the TCP connection and bearing the overhead of
connection establishment again (which is certainly prohibitive in my case).
Is this change technically difficult, or is it not supported by the protocol?
I would appreciate an explanation of the reasoning, as I am not a networking
expert. Please feel free to correct me.
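
For reference, these are the only knobs I know of for asking libcurl to
multiplex transfers over one HTTP/2 connection (easy_handle below is just
illustrative); as far as I can tell they do not change the premature-close
behaviour described above:

  /* multi handle: allow new transfers to be multiplexed over an existing
     HTTP/2 connection to the same host */
  curl_multi_setopt(multi, CURLMOPT_PIPELINING, CURLPIPE_MULTIPLEX);

  /* each easy handle: negotiate HTTP/2 (falls back to 1.1 if the server
     cannot do it) */
  curl_easy_setopt(easy_handle, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_2_0);

Even with these set, once curl_multi_remove_handle is called on an unfinished
transfer, the whole connection is closed rather than just that stream, which
is exactly the cost I am trying to avoid.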

Regards
kartik
