
crash seen in multi_socket

From: Kunal Ekawde via curl-library <>
Date: Sat, 11 May 2019 22:26:57 +0530


I'm using libcurl 7.64.0 with nghttp2 for an HTTP/2 call flow. With HTTP/1.1
this crash is not seen.

I see the following crashes:
0!multi_socket + 0x129
1!curl_multi_socket_action + 0x25
2!HttpClientManager::SocketEventTriggered(int, unsigned int) [ : 348 + 0x5]

 0!__GI__IO_fwrite + 0x1e
 1!Curl_debug + 0x8d
 2!Curl_infof + 0x12c
 3!http2_conncheck + 0xc2
 4!extract_if_dead + 0x3b
 5!Curl_connect + 0x216b
 6!multi_runsingle + 0x577
 7!multi_socket + 0x27d
 8!curl_multi_socket_action + 0x25

1. Initiate multiple client requests over HTTP/2 with prior knowledge (from
the application using the library), e.g. 4 streams on the same connection,
using the multi async interface.
2. The HTTP/2 server responds to only one request and then dies.

On initial analysis, for case #1:

Crash point, in multi_socket:

      /* the socket can be shared by many transfers, iterate */
      for(e = list->head; e; e = e->next) {
        data = (struct Curl_easy *)e->ptr;
        if(data->magic != CURLEASY_MAGIC_NUMBER)

Accessing data results in a trap.

Tried a fix based on my understanding:
1. multi_socket() -> does multi_runsingle(), where the disconnect is detected
and a reconnect is attempted.
2. singlesocket() does sh_delentry().
3. Curl_llist_remove() does:

  e->ptr = NULL;
  e->prev = NULL;
  e->next = NULL;

4. During the next iteration, as ptr has been set to NULL, data comes out
NULL.

Please correct if my analysis is wrong.
I tried the following fix:

      /* the socket can be shared by many transfers, iterate */
      for(e = list->head; e; e = e->next) {
        data = (struct Curl_easy *)e->ptr;
        /* crash fix - temp */
        if(!data)
          continue;

With this, case #1 was fixed as per the scenario. The case #2 trap was still
seen with this patch under a minimal load run, so I updated the patch to
'return result' instead of 'continue'.

While this is just a defensive fix and the root-cause fix could be elsewhere,
I request you to please check / comment.


Received on 2019-05-11