
Re: Memory "leak" when using gnutls as SSL library

From: David Guillen Fandos via curl-users <curl-users_at_cool.haxx.se>
Date: Mon, 9 Dec 2019 20:37:15 +0100

Hey there!

I'm trying to reproduce this on my dev machine (Fedora 31), and while it
still shows some of the behaviour, it is not as bad as on the prod
machine (Ubuntu 18.04).

To summarize a bit:

Ubuntu 18.04 + GnuTLS: peaks to 100MB+ memory under very light load.
Fedora 31 + OpenSSL: 20-24MB under test load.
Fedora 31 + GnuTLS (v3.6.8): 60-64MB under an equivalent test load.

Heaptrack insists the issue has something to do with certificate
handling; see https://imgur.com/yXpbkN5. I'm assuming the memory usage
differences between the Ubuntu and Fedora machines could be due to the
CA cert files the library is loading (perhaps there are more certs on
the Ubuntu machine).
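
For what it's worth, a quick way to test that theory would be a
standalone program that loads each machine's bundle and prints how many
certs it contains, since gnutls_certificate_set_x509_trust_file returns
the number of certificates processed. A minimal sketch (the bundle path
is distro-specific; Ubuntu's is shown here, Fedora keeps its bundle
elsewhere):

    /* sketch: count the CA certs GnuTLS parses out of the system
       bundle; adjust the path for the distro being tested */
    #include <stdio.h>
    #include <gnutls/gnutls.h>

    int main(void)
    {
      gnutls_certificate_credentials_t cred;
      int n;

      gnutls_global_init();
      gnutls_certificate_allocate_credentials(&cred);
      /* returns the number of certificates processed on success */
      n = gnutls_certificate_set_x509_trust_file(
            cred, "/etc/ssl/certs/ca-certificates.crt",
            GNUTLS_X509_FMT_PEM);
      printf("loaded %d CA certs\n", n);
      gnutls_certificate_free_credentials(cred);
      gnutls_global_deinit();
      return 0;
    }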

So next I did what you suggested: added some prints here and there to
see whether libcurl might be doing something dodgy. To do this I used
the same libcurl version (7.58) on my dev machine as on prod; however,
for GnuTLS I had to use a different version, since older versions fail
to build (some dependency is too new/old, headers changed and functions
got deprecated in the meantime) :D oh well.

TLDR: there seems to be no issue at
gnutls_certificate_allocate_credentials; it's always being called with
a NULL pointer, so no double allocation or weirdness is happening. I
also re-tested with the latest libcurl version to ensure this
behaviour remains unchanged, and it looks like it's still there.
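
In case it's useful, the trace I added was roughly this (a fragment,
not standalone code; field names are approximate for the 7.58 source
of lib/vtls/gtls.c):

    /* fragment: the kind of trace added inside gtls_connect_step1();
       BACKEND->cred was always NULL here in my runs, so there is no
       double allocation */
    int rc;
    fprintf(stderr, "cred before alloc: %p\n", (void *)BACKEND->cred);
    rc = gnutls_certificate_allocate_credentials(&BACKEND->cred);
    fprintf(stderr, "alloc rc=%d cred=%p\n", rc, (void *)BACKEND->cred);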

Digging a bit more into the memory usage (see
https://imgur.com/Efkl02C), it seems to come from
gnutls_certificate_set_x509_trust_file, which is just loading all the
CA-cert-related files, no biggie. I understand the OpenSSL counterpart
is SSL_CTX_load_verify_locations.
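
For comparison, the minimal OpenSSL equivalent would look something
like this (again just a sketch; note that OpenSSL can also take a
CApath directory, where certs are looked up lazily by hash rather than
parsed up front):

    /* sketch: the OpenSSL counterpart, for comparison */
    #include <openssl/ssl.h>

    int main(void)
    {
      SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());

      /* a bundle file, parsed when this call is made ... */
      SSL_CTX_load_verify_locations(ctx,
          "/etc/ssl/certs/ca-certificates.crt", NULL);
      /* ... or a hashed directory, where lookups happen on demand */
      SSL_CTX_load_verify_locations(ctx, NULL, "/etc/ssl/certs");

      SSL_CTX_free(ctx);
      return 0;
    }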

So perhaps the "issue" is that OpenSSL is just told the CA paths and
defers loading and parsing them, perhaps even sharing the result across
all contexts, whereas GnuTLS actually loads the entire CA store into
memory for every SSL connection that's still around. And since libcurl
keeps some SSL connections alive, that could amount to a non-trivial
amount of memory, which is what I was seeing in the first place. Then
it wouldn't be a bug, just an inefficiency of the library itself due to
the way certs are handled.
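
If that theory holds, a possible workaround on the application side
would be to shrink libcurl's connection cache, so fewer kept-alive TLS
connections (each with its own copy of the CA store) stay in memory.
A sketch; CURLOPT_MAXCONNECTS is a real option, but the effect on
GnuTLS memory here is just my assumption:

    /* sketch: keep at most one cached connection on this easy handle */
    #include <curl/curl.h>

    int main(void)
    {
      CURL *curl;

      curl_global_init(CURL_GLOBAL_DEFAULT);
      curl = curl_easy_init();
      curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
      curl_easy_setopt(curl, CURLOPT_MAXCONNECTS, 1L); /* default is 5 */
      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
      curl_global_cleanup();
      return 0;
    }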

Hope this email makes sense :P

David

On 21/11/2019 00:08, Daniel Stenberg wrote:
> On Wed, 20 Nov 2019, David Guillen Fandos via curl-users wrote:
>
> Thanks for your report and research into this issue! (Since this is
> rather deep debugging and error tracking, it might be more suitable to
> either discuss on the curl-library list or perhaps file as an issue.)
>
>> Using the nonblocking API (this is the multi interface? I think the
>> non-multi calls also end up nonblocking) results in the SSL backend
>> also being used in a non-blocking way.
>
> Everything internally is "multi" and everything is using non-blocking
> sockets, so for the gnutls code there's only one way of working no
> matter which libcurl API that's used.
>
>> The nonblocking code seems to go through the gtls_connect_step1 path
>> which seems to me like it could be initializing "cred" (BACKEND->cred)
>> more than once, which under the hood is a calloc/malloc memory
>> allocation. On the other hand, OpenSSL seems to be checking for a
>> non-NULL pointer in the CTX and freeing when appropriate.
>
> Can't you just add some printf() code in there to see if it truly gets
> called multiple times when you see the problem? BTW, does this
> reproduce easily for you? Like every time on all https sites you test,
> or is it less likely to trigger than that?
>
> Can you tell us how to reproduce this problem? Do you perhaps have a
> suggested fix?
>
> Which gnutls version is this?
>
>> the cred pointer, missing some memory along the way. However I fail to
>> understand how come valgrind and other tools are happy with this.
>
> Yes, that's really curious. I would say that would make it not
> actually a memory leak, but it could still very well be a "too much
> memory consumed" error.
>