Re: Memory leak with curl_multi_socket_action
Date: Mon, 25 May 2020 19:31:58 -0400
On Mon, May 25, 2020 at 7:16 PM James Read <jamesread5737_at_gmail.com> wrote:
>
>
>
> On Tue, May 26, 2020 at 12:02 AM Jeffrey Walton <noloader_at_gmail.com> wrote:
>>
>> On Mon, May 25, 2020 at 6:27 PM James Read via curl-library
>> <curl-library_at_cool.haxx.se> wrote:
>> >
>> > ...
>> >
>> > Gmail seems to have taken out all the formatting. Apologies. It should still compile though.
>>
>> I can't speak for others, but... You should probably reduce the code
>> to a minimal reproducer, and then put it on GitHub or another place
>> folks can 'git clone' and then 'make'.
>
>
> git clone https://github.com/JamesRead5737/libcurlmemoryleak.git
>
> No need for make. Just compile with:
>     gcc crawler.c -g -lssl -lcurl
> Then run valgrind with:
>     valgrind -v --tool=memcheck --leak-check=full --show-reachable=yes --track-origins=yes --log-file=memcheck.log ./a.out
>
> This should reproduce what I've been talking about.
The program has been running for about 10 minutes and it still has not ended:
Parsed sites: 0, 367 parallel connections, 364 still running
I think you should reduce the program to a minimal reproducer.
This does not look correct (to me) in new_conn:
conn->easy = curl_easy_init();
I thought curl_easy_init() should only be called once for the
application, not once per thread(?).
Jeff
-------------------------------------------------------------------
Unsubscribe: https://cool.haxx.se/list/listinfo/curl-library
Etiquette: https://curl.haxx.se/mail/etiquette.html
Received on 2020-05-26