Re: curl_multi_info_read() returning result of CURLE_RECV_ERROR

From: Ray Satiro via curl-library <curl-library_at_cool.haxx.se>
Date: Wed, 4 Nov 2015 15:00:50 -0500

On 11/4/2015 6:54 AM, KS Lee wrote:
> Thank you, Ray, for the reply.
>
>
> That you have the same issue with WinSSL as you do OpenSSL leads me to
> believe this isn't a problem with either. You don't have this problem
> without SSL?
>
>
> The connection absolutely requires SSL, so it's a use case we're
> stuck with. We have tried WinSSL, OpenSSL and LibreSSL - all ended
> with the same result, i.e. it worked with a proxy in between but
> failed with CURLE_RECV_ERROR without the proxy.
>
>
> I apologize if this was answered already but I can only
> find part of the thread now. Is it possible that the reason you don't
> see an issue over a proxy like Fiddler is because you didn't use it
> enough to observe the issue there? In other words just because it
> appears to work fine through Fiddler a few times isn't the same as
> your
> normal use.
>
>
> We also convinced IT Security Audit to allow the proxy connection in
> the interim, using libcurl built with OpenSSL 1.0.2d. It has been
> running fine for weeks already. Once the proxy is removed, bam, the
> error returns.
>
> This is not a solution that our auditors like at all, and they are
> pressing us to remove the unauthorised proxy as soon as possible.
>
> You wrote in another e-mail "Wireshark traces for the non-proxy
> scenario, it is showing a TCP disconnecting from libcurl, and then
> reconnecting at every HTTP interaction with the web server. This is
> normal behaviour, since keep-alive was not set." [1] Maybe the
> constant
> reconnection is why the issue can be more readily observed without a
> proxy.
>
>
> Yes, that was the hunch we went with.
>
> When you say keep alive are you referring to TCP keepalive or
> sending an actual Keep-Alive in the header which shouldn't be
> necessary
> with HTTP/1.1 since keepalive is the default?
>
>
> We were wondering whether the proxy shielded the reconnections from
> the web server. So, in some of our test cases, we enabled the
> libcurl KEEP-ALIVE option and re-ran the test with the app that ran
> fine with a proxy. Again, same problem.
>
>
> So, what's so special about putting a proxy service in between the
> libcurl-calling app and the HTTPS web server? That has stumped us.
>
> At this point, we're willing to try anything.
>

Are multiple server IPs involved? Maybe there is a problem with some
specific IP, and/or that IP is not used when you go through the proxy.
Also, I notice you are using IGNORE_CONTENT_LENGTH - why? Try disabling
it and see what happens. That option does exactly what it says: it
makes libcurl ignore the Content-Length header. Although I don't see
this in your logs, consider the following scenario:

- Client makes a request.
- Server replies without chunked encoding or a connection close.
- Client ignores the content length, and because there is no other
  indicator (chunked, close) to go on, it just waits around, staying
  connected.
- Server (or something in between) severs the connection uncleanly.
- Client starts a new connection and the cycle repeats.

Now suppose that happens n times and hits some threshold, after which
connections that do have a chunked reply get severed the same way. The
proxy masks the severed connection, and that's why you don't see a recv
error: the proxy takes the recv error and gives you a clean close
instead. In your Wireshark logs from the runs through the proxy, did
you capture those packets as well? Was it a Fiddler proxy on localhost
or something similar? Check for this scenario.
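
If IGNORE_CONTENT_LENGTH is being set somewhere in your code, turning
it back off is a one-line change, and verbose output will show what
framing (Content-Length, chunked, close) each response actually
carries. Roughly like this - just a sketch, assuming an easy handle
named curl:

   /* stop ignoring the Content-Length header; 0 restores the default
      behaviour so libcurl can use it to delimit responses */
   curl_easy_setopt(curl, CURLOPT_IGNORE_CONTENT_LENGTH, 0L);

   /* verbose output shows the response framing libcurl sees */
   curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L);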

If you have to ignore the content length, you can try requesting that
the server close the connection after each response by sending a
"Connection: close" header. Really I wouldn't do that unless you have
to, though, because it sends performance down the drain. You might also
try disabling the Expect: 100-continue handshake, because your server
may be too impatient for the delay while libcurl waits for the 100
response.

So try disabling ignore_content_length first. If that doesn't help, and
you find you really do have to ignore the content length, then find out
why, be really sure, and then you could try this:

   struct curl_slist *header_list = NULL;
   /* an empty "Expect:" header disables the Expect: 100-continue handshake */
   header_list = curl_slist_append(header_list, "Expect:");
   /* ask the server to close the connection after each response */
   header_list = curl_slist_append(header_list, "Connection: close");
   curl_easy_setopt(curl, CURLOPT_HTTPHEADER, header_list);
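
Once the transfer is done (or when you want to change the headers),
free the list:

   curl_slist_free_all(header_list);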

You could also try switching to HTTP/1.0 just to see what happens:

   curl_easy_setopt(curl, CURLOPT_HTTP_VERSION, CURL_HTTP_VERSION_1_0);
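
Whatever you end up trying, it may also help to capture libcurl's own
view of the headers on a failing connection and compare it with a
working one. Something along these lines - a rough sketch, not taken
from your code - dumps request and response headers to stderr:

   /* trace callback: log every header libcurl sends or receives */
   static int trace_cb(CURL *handle, curl_infotype type, char *data,
                       size_t size, void *userp)
   {
     (void)handle; (void)userp;
     if(type == CURLINFO_HEADER_IN || type == CURLINFO_HEADER_OUT)
       fwrite(data, 1, size, stderr);
     return 0;
   }

   /* ... later, on the easy handle ... */
   curl_easy_setopt(curl, CURLOPT_DEBUGFUNCTION, trace_cb);
   curl_easy_setopt(curl, CURLOPT_VERBOSE, 1L); /* needed for the callback */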

-------------------------------------------------------------------
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette: http://curl.haxx.se/mail/etiquette.html
Received on 2015-11-04