http2.c assert due to lack of stream drain (can only repro with facebook) #1286

Closed

jay opened this issue Feb 23, 2017 · 11 comments

@jay
Member

jay commented Feb 23, 2017

I did this

curl -v -L https://facebook.com

It is redirected (via the Location header) to www.facebook.com for the second connection, and after receiving the content it asserts:

Assertion failed: httpc->drain_total >= data->state.drain, file ..\..\..\..\lib\http2.c, line 1572

I can see that http2_handle_stream_close isn't being called before the connection is redirected to www.facebook.com, and data->state.drain retains the drain value from the previous stream.

Code path for the final http2_recv call of the first connection (facebook.com) looks like this:

PERFORM calling Curl_readwrite
Curl_readwrite(connectdata *, Curl_easy *, bool *, bool *)
http2_recv 	libcurld.dll!http2_recv() 
	libcurld.dll!Curl_read() 
	libcurld.dll!readwrite_data() 
	libcurld.dll!Curl_readwrite() 
	libcurld.dll!multi_runsingle() 
	libcurld.dll!curl_multi_perform() 
	libcurld.dll!easy_transfer() 
	libcurld.dll!easy_perform() 
	libcurld.dll!curl_easy_perform() 
	curld.exe!operate_do() 
	curld.exe!operate() 
	curld.exe!main() 
	curld.exe!__tmainCRTStartup() 
	curld.exe!mainCRTStartup() 
	kernel32.dll!@BaseThreadInitThunk@12() 
	ntdll.dll!___RtlUserThreadStart@8() 
	ntdll.dll!__RtlUserThreadStart@8() 
http2_recv begin: httpc->drain_total: 0, data->state.drain: 0

In that call it follows this code path: the nghttp2_session_mem_recv call, or something beneath it, sets httpc->drain_total to 2 and data->state.drain to 2, and stream->memlen != 0; however stream->closed is true, so data->state.drain is not reset.

Then comes the new connection to www.facebook.com, and the initial Curl_http2_setup sets httpc->drain_total to 0, which as far as I can tell is correct, because each connection should have its own drain total.

The next http2_recv call has httpc->drain_total: 0 and data->state.drain: 2, which is wrong: that 2 is the holdover, and data->state.drain should basically never be greater than httpc->drain_total. By some luck it continues to work and gets the content from facebook, but then asserts when the stream drain is attempted for www.facebook.com. At that point httpc->drain_total: 2, data->state.drain: 4 (normally we would expect 2 and 2 here).
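To spell out the bookkeeping I'm assuming here (sketch only; the structs and helper names below are made up for illustration, not actual libcurl code):

#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins, not the real curl structs. */
struct conn_sketch { size_t drain_total; };   /* per-connection total */
struct easy_sketch { size_t drain; };         /* per-transfer counter */

/* When a stream still has pending data to deliver, both counters are
   bumped together... */
static void mark_drain(struct conn_sketch *httpc, struct easy_sketch *state)
{
  state->drain++;
  httpc->drain_total++;
}

/* ...and when the transfer is drained they are reduced together, which is
   why drain_total >= drain must always hold - the invariant behind the
   assert at http2.c:1572. The stale drain of 2 carried over from the
   facebook.com connection breaks it on the www.facebook.com one. */
static void drain_done(struct conn_sketch *httpc, struct easy_sketch *state)
{
  assert(httpc->drain_total >= state->drain);
  httpc->drain_total -= state->drain;
  state->drain = 0;
}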

Based on this I think either an obligatory flush is missing somewhere, or maybe it's better to handle this by resetting the individual drain when an easy handle is disassociated from a stream. Does it actually need to be drained at that point? I'm guessing Tatsuhiro will probably have a better solution, but for now I'm doing this:

diff --git a/lib/http2.c b/lib/http2.c
index af69c72..d3eb40e 100644
--- a/lib/http2.c
+++ b/lib/http2.c
@@ -2000,6 +2000,8 @@ CURLcode Curl_http2_setup(struct connectdata *conn)

   httpc->pause_stream_id = 0;
   httpc->drain_total = 0;
+  if(conn->data)
+    conn->data->state.drain = 0;

   conn->bits.multiplex = TRUE; /* at least potentially multiplexed */
   conn->httpversion = 20;

I can only reproduce this with facebook.

I expected the following

a happy ending

curl/libcurl version

master curl-7_53_0-2-gb259646

curl 7.53.0-DEV (i386-pc-win32) libcurl/7.53.0-DEV OpenSSL/1.0.2k nghttp2/1.19.0
Protocols: dict file ftp ftps gopher http https imap imaps ldap pop3 pop3s rtsp smb smbs smtp smtps telnet tftp
Features: AsynchDNS Debug TrackMemory Largefile NTLM SSL HTTP2 HTTPS-proxy

operating system

Windows 7 x64

/cc @tatsuhiro-t

@jay jay added the HTTP/2 label Feb 23, 2017
@tatsuhiro-t
Contributor

I can reproduce this issue on a Linux desktop.
So curl does not reuse the connection between facebook.com and www.facebook.com, I guess because the hostname is not the same.

I originally thought that conn->data is initialized for each fresh connection, which would be the case when curl makes the second connection to www.facebook.com. But it seems that is not true.. is that correct? How much of conn->data is reused among different connections?

@bagder
Member

bagder commented Feb 24, 2017

curl does not reuse the connection between facebook.com and www.facebook.com, I guess because the hostname is not the same

Exactly. curl reuses connection based on host names - and some other criteria, but the names need to match.

I originally thought that conn->data is initialized for each fresh connection

The conn->data points to a Curl_easy struct and there's one such struct for every individual transfer. So even when the same connection is used for two transfers there are two Curl_easy structs.

In this case however, it should be two connections and two Curl_easy structs for all I can think of.

(I too can reproduce the problem but I've not yet found time to debug it.)
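Roughly, the relationship described above looks like this (a very simplified picture, not the real struct layouts):

#include <stddef.h>

struct Curl_easy;                 /* one per individual transfer */

struct connectdata {              /* one per connection */
  struct Curl_easy *data;         /* the transfer currently using this conn */
  /* ... */
};

struct Curl_easy {
  struct {
    size_t drain;                 /* the per-transfer drain counter */
  } state;
  /* ... */
};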

@tatsuhiro-t
Contributor

Thank you. So curl reuses the same Curl_easy struct for these 2 connections. I checked the conn->data pointer in the 2 connections, and they are the same.
If this is correct, the fix proposed by @jay looks reasonable to me.

@jay
Member Author

jay commented Feb 24, 2017

Two questions/concerns:

Imagine the second conn then had another easy handle assigned to it to stream from the same connection; Curl_http2_setup isn't going to be called again for that. So if that easy handle has a bad drain count, it isn't reset.

Thinking back on the first connection: its drain total is probably still wrong (I didn't check, but I assume so), and that connection is still open and could have other easy handles / streams associated with it. Then even if their stream drain counts were correct, the total drain count for the connection would be higher than expected and would (I assume) always want to drain?

I am thinking this could manifest itself in other ways, and what I submitted is more like a band-aid after the accident.. wrong?
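The kind of reset-on-disassociation I was thinking of earlier would be something along these lines (purely a sketch with stand-in types and a made-up function name, not a patch against the real stream-close path):

#include <stddef.h>

/* Stand-in types only; not the real curl structs. */
struct conn_sketch { size_t drain_total; };
struct easy_sketch { struct { size_t drain; } state; };

/* Made-up helper: when an easy handle leaves its stream, give its pending
   drain back to the old connection's total and clear it, so a later
   connection never inherits a stale non-zero value. */
static void detach_drain_reset(struct easy_sketch *data,
                               struct conn_sketch *httpc)
{
  if(httpc->drain_total >= data->state.drain)
    httpc->drain_total -= data->state.drain;
  data->state.drain = 0;
}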

@bagder
Member

bagder commented Feb 24, 2017

Imagine the second conn then had another easy handle assigned to it to stream from the same connection; Curl_http2_setup isn't going to be called again for that

Oh but it is, here: https://github.com/curl/curl/blob/master/lib/http.c#L1800 ?

@jay
Member Author

jay commented Feb 25, 2017

Touché, sir. OK, but isn't the point of keeping a drain total per connection not to reset it on each request?

@tatsuhiro-t
Contributor

Meanwhile, I'm just wondering how to call http2_handle_stream_close in this redirect case as well.

@bagder
Member

bagder commented Feb 25, 2017

isn't the point of keeping a drain total per connection not to reset it on each request?

Curl_http2_setup() has "protection" that makes it not do very much on subsequent calls though, so things like the drain total initialization only happen the first time for a connection.
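For reference, that guard is roughly of this shape (a simplified sketch from memory, with stand-in types so it compiles on its own - not a verbatim quote of lib/http2.c):

#include <stddef.h>

/* Minimal stand-ins, not the real curl types. */
typedef int CURLcode;
#define CURLE_OK 0
struct handler { int id; };
static const struct handler Curl_handler_http2 = {2};      /* stand-in */
static const struct handler Curl_handler_http2_ssl = {3};  /* stand-in */
struct http_conn { size_t drain_total; };
struct connectdata {
  const struct handler *handler;
  struct { struct http_conn httpc; } proto;
};

/* Once the connection's handler has been switched to HTTP/2, a later
   setup call returns early and skips the per-connection initialization,
   including drain_total = 0 - so the reset happens once per connection,
   not once per request. */
static CURLcode http2_setup_sketch(struct connectdata *conn)
{
  if(conn->handler == &Curl_handler_http2 ||
     conn->handler == &Curl_handler_http2_ssl)
    return CURLE_OK;                    /* already set up; do very little */

  conn->handler = &Curl_handler_http2;  /* first call: switch handler */
  conn->proto.httpc.drain_total = 0;    /* ...and initialize counters */
  return CURLE_OK;
}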

@bagder
Member

bagder commented Feb 25, 2017

Meanwhile, I'm just wondering how to call http2_handle_stream_close in this redirect case as well.

If the end of the stream isn't received before libcurl decides to act and follow the redirect, there's a risk that the end of stream sits unread on the previous connection. That's part of the old known problem that libcurl doesn't really "monitor" connections in the connection cache fully, with the risk that libcurl will consider such a connection dead when it goes back to possibly reuse it. That's mentioned in KNOWN_BUGS 1.9 "HTTP/2 frames while in the connection pool kill reuse".

@tatsuhiro-t
Contributor

Thank you for the pointer. I had thought of an HTTP/2 transfer as being just like HTTP/1 chunked but with a known content length. curl handles the 0-length chunked case (I mean the content is 0 length, but curl still has to parse the chunked encoding). We can follow a similar execution path and call http2_recv even if the content length is 0; http2_recv then calls http2_handle_stream_close.

After some code reading, the following patch also fixes this issue:

diff --git a/lib/http.c b/lib/http.c
index 8db86cd84..957a4b739 100644
--- a/lib/http.c
+++ b/lib/http.c
@@ -3242,7 +3242,12 @@ CURLcode Curl_http_readwrite_headers(struct Curl_easy *data,
 
         /* If max download size is *zero* (nothing) we already
            have nothing and can safely return ok now! */
-        if(0 == k->maxdownload)
+        if(0 == k->maxdownload
+#if defined(USE_NGHTTP2)
+           && !((conn->handler->protocol & PROTO_FAMILY_HTTP) &&
+                conn->httpversion == 20)
+#endif
+           )
           *stop_reading = TRUE;
 
         if(*stop_reading) {

This is yet another special-casing of HTTP/2, so I'm not too happy with it, but it works.
I'm wondering whether we still have similar cases to deal with.

@tatsuhiro-t
Contributor

I made a PR using the suggested fix above.

@bagder bagder closed this as completed in 475c258 Mar 7, 2017