curl-library
HTTPS connection dropped on SSL_ERROR_WANT_WRITE
Date: Sat, 5 Mar 2016 09:25:51 +0000
[Bug Summary]
We are making pipelined HTTPS requests to a web server using a custom HTTPS client built on top of libcurl, with OpenSSL as the TLS/SSL layer.
Whenever SSL_write() returns SSL_ERROR_WANT_WRITE or SSL_ERROR_WANT_READ, the connection gets broken and a new connection is established.
[Repro Steps]
1. Version details (HTTPS client side)
libcurl: 7.47.1
Platform: FreeBSD 9.2 64 bit
OpenSSL: 1.0.1m
2. Create a multi-handle with the following settings:
CURLMOPT_MAX_HOST_CONNECTIONS = 1
CURLMOPT_PIPELINING = 1
CURLMOPT_MAX_PIPELINE_LENGTH = 50
3. Create 50 easy handles for making HTTPS requests to the *same* HTTPS web server
4. The issue is easier to reproduce if TLS renegotiation is left enabled on both sides
5. Add all easy handles to the multi-handle
6. Run the multi-handle to completion
7. Simultaneously take a packet capture at the client side
[Expected Behaviour]
A single TCP connection is established to the web server and all requests are pipelined over it
[Actual Behaviour]
The packet capture shows that a connection is established, some requests complete on it, then it is reset (RST) from the client side and a fresh connection is established.
In effect, several connections may be established and torn down before all requests complete.
[Impact]
Several thousand clients connect to a single web server for our business application.
The client and server are designed to maintain a single persistent connection for a long time and pipeline all requests over HTTPS.
Each connection setup and tear-down is an expensive operation for the server.
The existing behaviour results in far too many connection setup/teardown operations, overloading the server.
[Analysis]
1. The libcurl implementation uses non-blocking sockets with multiple requests pipelined over the same connection.
2. It maintains a two-way association between requests and connections (sockets)
3. It maintains queues of reader and writer requests and the socket is “awarded” in FIFO order to each request to perform its pending read/write operation
4. If a read/write operation did not complete because the socket was not ready (EWOULDBLOCK), the socket can still be taken away from the request and awarded to the next one in queue
- An EWOULDBLOCK in the “DO” phase results in an HTTPS request moving directly to the “DONE” phase
- The next request in the queue can then be awarded the socket
5. For plain HTTP, this has no functional impact; the request that could not complete gets pushed back to the back of the queue and will be retried when its turn comes next
6. However, this behaviour is incorrect from OpenSSL’s point of view, due to the way SSL_write() works (https://www.openssl.org/docs/manmaster/ssl/SSL_write.html):
"If the underlying BIO is non-blocking, SSL_write() will also return, when the underlying BIO could not satisfy the needs of SSL_write() to continue the operation. In this case a call to SSL_get_error with the return value of SSL_write() will yield SSL_ERROR_WANT_READ or SSL_ERROR_WANT_WRITE. [...] When an SSL_write() operation has to be repeated because of SSL_ERROR_WANT_READ or SSL_ERROR_WANT_WRITE, it must be repeated with the same arguments."
7. After an SSL_ERROR_WANT_XXX, libcurl’s behaviour in point (5) means SSL_write() is next invoked on behalf of a different request, i.e. with different arguments, resulting in a write failure that causes the connection to be aborted.
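To make point (7) concrete, here is a minimal, self-contained simulation of OpenSSL's documented contract (this is a mock, not the real SSL_write(); the default behaviour without SSL_MODE_ACCEPT_MOVING_WRITE_BUFFER is that a retry after WANT_WRITE must pass the same buffer):

mock_ssl_write.c:

#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Mock of the non-blocking SSL_write() contract: after a WANT_WRITE,
   the next call must be repeated with the same arguments. */
#define MOCK_WANT_WRITE -1
#define MOCK_ERROR      -2

static const void *pending_buf = NULL; /* buffer of the interrupted write */
static int first_attempt = 1;

static int mock_ssl_write(const void *buf, size_t len)
{
  if(pending_buf && buf != pending_buf)
    return MOCK_ERROR;        /* "bad write retry": connection aborted */
  if(first_attempt) {
    first_attempt = 0;
    pending_buf = buf;        /* socket not ready: remember the arguments */
    return MOCK_WANT_WRITE;
  }
  pending_buf = NULL;         /* retry with the same arguments succeeds */
  return (int)len;
}

int main(void)
{
  char req_a[] = "GET /a HTTP/1.1";
  char req_b[] = "GET /b HTTP/1.1";

  /* First write would block: WANT_WRITE, must be retried with req_a */
  assert(mock_ssl_write(req_a, strlen(req_a)) == MOCK_WANT_WRITE);

  /* libcurl instead awards the socket to the next pipelined request,
     so SSL_write() is called with different arguments: hard failure */
  assert(mock_ssl_write(req_b, strlen(req_b)) == MOCK_ERROR);

  /* Retrying with the original arguments would have succeeded */
  assert(mock_ssl_write(req_a, strlen(req_a)) == (int)strlen(req_a));

  printf("ok\n");
  return 0;
}

Switching buffers mid-write is exactly what happens when the socket is handed to the next request in the queue, which is why the failure only appears on TLS connections and not on plain HTTP.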
[Suggested Fix]
Introduce a “DOING” phase for HTTPS. I have prototype code for this fix and it works for our use-case.
I’d like the members’ opinion on whether this is the right approach to the issue. If members agree I can post the fix for review here.
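As a sketch of the idea (illustrative only — this is not libcurl's actual state machine, and the helper name is hypothetical): a request that hits EWOULDBLOCK in the “DO” phase would stay in a “DOING” phase and keep the socket, instead of jumping to “DONE” and letting the next request write into the half-finished SSL stream.

#include <assert.h>
#include <stdio.h>

/* Illustrative phases only; libcurl's real internals differ. */
enum req_phase { PHASE_DO, PHASE_DOING, PHASE_DONE };

/* hypothetical helper: advance one request given socket readiness */
static enum req_phase step(enum req_phase phase, int would_block)
{
  switch(phase) {
  case PHASE_DO:
  case PHASE_DOING:
    /* incomplete write: stay in DOING and retry with the same buffer */
    return would_block ? PHASE_DOING : PHASE_DONE;
  default:
    return PHASE_DONE;
  }
}

int main(void)
{
  enum req_phase p = PHASE_DO;
  p = step(p, 1); /* EWOULDBLOCK: request keeps the connection */
  assert(p == PHASE_DOING);
  p = step(p, 1); /* still not writable: remain in DOING */
  assert(p == PHASE_DOING);
  p = step(p, 0); /* write completed: only now move to DONE */
  assert(p == PHASE_DONE);
  printf("ok\n");
  return 0;
}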
[Demo client code snippet to illustrate the issue]
/* assumed declarations, left out of the original snippet:
   int i, still_running = 0;
   CURL *http_handles[PIPELINE_LENGTH]; */

multi_handle = curl_multi_init();
curl_multi_setopt(multi_handle, CURLMOPT_MAX_HOST_CONNECTIONS, 1L);
curl_multi_setopt(multi_handle, CURLMOPT_PIPELINING, 1L);
curl_multi_setopt(multi_handle, CURLMOPT_MAX_PIPELINE_LENGTH, (long)PIPELINE_LENGTH);

for(i = 0; i < PIPELINE_LENGTH; i++) {
  CURL *http_handle = curl_easy_init();

  /* set the options (I left out a few, you'll get the point anyway) */
  curl_easy_setopt(http_handle, CURLOPT_URL, "https://www.example.com/<random_url>");

  /* add the individual transfers */
  curl_multi_add_handle(multi_handle, http_handle);
  http_handles[i] = http_handle;
}

curl_multi_perform(multi_handle, &still_running);

do {
  struct timeval timeout;
  int rc;       /* select() return code */
  CURLMcode mc; /* curl_multi_fdset() return code */
  fd_set fdread;
  fd_set fdwrite;
  fd_set fdexcep;
  int maxfd = -1;
  long curl_timeo = -1;

  FD_ZERO(&fdread);
  FD_ZERO(&fdwrite);
  FD_ZERO(&fdexcep);

  /* set a suitable timeout to play around with */
  timeout.tv_sec = 1;
  timeout.tv_usec = 0;

  curl_multi_timeout(multi_handle, &curl_timeo);
  if(curl_timeo >= 0) {
    timeout.tv_sec = curl_timeo / 1000;
    if(timeout.tv_sec > 1)
      timeout.tv_sec = 1;
    else
      timeout.tv_usec = (curl_timeo % 1000) * 1000;
  }

  /* get file descriptors from the transfers */
  mc = curl_multi_fdset(multi_handle, &fdread, &fdwrite, &fdexcep, &maxfd);
  if(mc != CURLM_OK) {
    fprintf(stderr, "curl_multi_fdset() failed, code %d.\n", mc);
    break;
  }

  /* On success the value of maxfd is guaranteed to be >= -1. We call
     select(maxfd + 1, ...); specially in case of (maxfd == -1) there are
     no fds ready yet so we call select(0, ...) --or Sleep() on Windows--
     to sleep 100ms, which is the minimum suggested value in the
     curl_multi_fdset() doc. */
  if(maxfd == -1) {
    struct timeval wait = { 0, 100 * 1000 }; /* 100ms */
    rc = select(0, NULL, NULL, NULL, &wait);
  }
  else {
    /* Note that on some platforms 'timeout' may be modified by select().
       If you need access to the original value save a copy beforehand. */
    rc = select(maxfd + 1, &fdread, &fdwrite, &fdexcep, &timeout);
  }

  switch(rc) {
  case -1:
    /* select error */
    still_running = 0;
    printf("select() returned an error, this is badness\n");
    break;
  case 0:
  default:
    /* timeout or readable/writable sockets */
    curl_multi_perform(multi_handle, &still_running);
    break;
  }
} while(still_running);

/* remove and clean up the easy handles before the multi handle */
for(i = 0; i < PIPELINE_LENGTH; i++) {
  curl_multi_remove_handle(multi_handle, http_handles[i]);
  curl_easy_cleanup(http_handles[i]);
}
curl_multi_cleanup(multi_handle);
Regards,
- Ameya
-------------------------------------------------------------------
List admin: https://cool.haxx.se/list/listinfo/curl-library
Etiquette: https://curl.haxx.se/mail/etiquette.html
Received on 2016-03-05