
Memory "leak" when using gnutls as SSL library

From: David Guillen Fandos via curl-users <>
Date: Wed, 20 Nov 2019 20:35:34 +0100

Hello there!

I think this topic has been covered already in some other emails I found
in the archive, but I was still experiencing it myself the other day. The
way I got here was by running a couple of heap profilers (the first time
I saw the gnutls library blamed for the memory usage I assumed google
perf tools were just lying, but... no), which claimed that some
certificate function in gnutls had allocated more than 130MB of RAM for
just two or three HTTPS requests.

After seeing this I switched to libcurl-openssl, which did nothing like
that; the SSL bits themselves seemed to use 1 or 2 MB at most. So I
started debugging out of curiosity, since to me this is really a bug.
After some time reading the sources, and with the help of the backtrace
from heaptrack [1], I came to the following conclusion:

Using the nonblocking API (this is the multi interface? I think the
non-multi interface also uses nonblocking internally) results in the SSL
backend also being used in a non-blocking way. The nonblocking code goes
through the gtls_connect_step1 path, which looks to me like it could
initialize "cred" (BACKEND->cred) more than once; under the hood that is
a calloc/malloc allocation. The OpenSSL backend, on the other hand,
seems to check for a non-NULL pointer in the CTX and free it when
appropriate.

So, just theorizing here: if this were called many times per connection
(I guess potentially every time you receive some bytes from the socket
during the TLS handshake, to try to parse the ongoing frame), it would
re-initialize the cred pointer, leaking some memory along the way.
However, I fail to understand why valgrind and other tools are happy
with this. I thought gnutls_malloc might be reference counting or
otherwise properly releasing all memory during deinit, but I don't think
that is the case, so everything I just said could be just my
imagination. Still, that doesn't explain how it takes 130MB to do a
couple of SSL requests (to the same host, so I assume the sessions are
probably reused by the cache!).

Thank you very much for your help

1: heaptrack trace:

1168634 calls to allocation functions with 120.44MB peak consumption from
  in /usr/lib/x86_64-linux-gnu/
111720 calls with 10.72MB peak consumption from:
      in /usr/lib/x86_64-linux-gnu/
      in /usr/lib/x86_64-linux-gnu/
      in /usr/lib/x86_64-linux-gnu/
      in /usr/lib/x86_64-linux-gnu/
      in /usr/lib/x86_64-linux-gnu/
      in /usr/lib/x86_64-linux-gnu/
      in /usr/lib/x86_64-linux-gnu/
      in /usr/lib/x86_64-linux-gnu/
      in /usr/lib/x86_64-linux-gnu/
      in /usr/lib/x86_64-linux-gnu/
      in /usr/lib/x86_64-linux-gnu/
      in /usr/lib/x86_64-linux-gnu/
      in /usr/lib/x86_64-linux-gnu/
      in /usr/lib/x86_64-linux-gnu/
      in /usr/lib/x86_64-linux-gnu/
Received on 2019-11-20