libcurl 8.16.0 spawning large number of getaddrinfo threads?
From: Vadim Grinshpun via curl-library <curl-library_at_lists.haxx.se>
Date: Thu, 16 Oct 2025 19:45:47 -0400
Hi folks,

I'm working on a project that uses libcurl. After upgrading libcurl from 8.14.1 to 8.16.0, I've noticed an unexpected behavior change: the process accumulates a large number of curl_thread_create_thunk threads, all with backtraces like the ones shown below.

Since the only thing that changed is the libcurl version, I want to double-check a few things:
- are there any known issues in 8.16.0 that might have symptoms like this?
- has anything changed between 8.14.1 and 8.16.0 that might lead to this behavior?
- is it possible to misuse libcurl in a way that can lead to this behavior?

Thanks for any suggestions!
-Vadim
Stack traces:
Hundreds of threads look like this; a couple of examples are below:
Thread 953 (Thread 0x7f620bbf6640 (LWP 4084852) "ddc_beacon_rece"):
#0  futex_wait (private=0, expected=2, futex_word=0x7f640ac23040 <lock>) at ../sysdeps/nptl/futex-internal.h:146
#1  __GI___lll_lock_wait_private (futex=futex@entry=0x7f640ac23040 <lock>) at ./nptl/lowlevellock.c:34
#2  0x00007f640ab43dc4 in __check_pf (seen_ipv4=seen_ipv4@entry=0x7f620bbf1516, seen_ipv6=seen_ipv6@entry=0x7f620bbf1517, in6ai=in6ai@entry=0x7f620bbf1528, in6ailen=in6ailen@entry=0x7f620bbf1530) at ../sysdeps/unix/sysv/linux/check_pf.c:307
#3  0x00007f640ab0bd61 in __GI_getaddrinfo (name=<optimized out>, service=<optimized out>, service@entry=0x7f620bbf1cfc "8482", hints=<optimized out>, hints@entry=0x7f638ebb1008, pai=pai@entry=0x7f620bbf1c10) at ../sysdeps/posix/getaddrinfo.c:2446
#4  0x0000561dbba888b8 in Curl_getaddrinfo_ex (nodename=<optimized out>, servname=servname@entry=0x7f620bbf1cfc "8482", hints=hints@entry=0x7f638ebb1008, result=result@entry=0x7f638ebb1000) at curl_addrinfo.c:122
#5  0x0000561dbba7a808 in getaddrinfo_thread (arg=arg@entry=0x7f638ebb0fc0) at asyn-thrdd.c:245
#6  0x0000561dbba88fff in curl_thread_create_thunk (arg=<optimized out>) at curl_threads.c:57
#7  0x00007f640aa94ac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#8  0x00007f640ab269d0 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
...
Thread 11 (Thread 0x7f63f4572640 (LWP 2016127) "ddc_beacon_rece"):
#0  futex_wait (private=0, expected=2, futex_word=0x7f640ac23040 <lock>) at ../sysdeps/nptl/futex-internal.h:146
#1  __GI___lll_lock_wait_private (futex=futex@entry=0x7f640ac23040 <lock>) at ./nptl/lowlevellock.c:34
#2  0x00007f640ab43dc4 in __check_pf (seen_ipv4=seen_ipv4@entry=0x7f63f456d516, seen_ipv6=seen_ipv6@entry=0x7f63f456d517, in6ai=in6ai@entry=0x7f63f456d528, in6ailen=in6ailen@entry=0x7f63f456d530) at ../sysdeps/unix/sysv/linux/check_pf.c:307
#3  0x00007f640ab0bd61 in __GI_getaddrinfo (name=<optimized out>, service=<optimized out>, service@entry=0x7f63f456dcfc "8482", hints=<optimized out>, hints@entry=0x7f6406422348, pai=pai@entry=0x7f63f456dc10) at ../sysdeps/posix/getaddrinfo.c:2446
#4  0x0000561dbba888b8 in Curl_getaddrinfo_ex (nodename=<optimized out>, servname=servname@entry=0x7f63f456dcfc "8482", hints=hints@entry=0x7f6406422348, result=result@entry=0x7f6406422340) at curl_addrinfo.c:122
#5  0x0000561dbba7a808 in getaddrinfo_thread (arg=arg@entry=0x7f6406422300) at asyn-thrdd.c:245
#6  0x0000561dbba88fff in curl_thread_create_thunk (arg=<optimized out>) at curl_threads.c:57
#7  0x00007f640aa94ac3 in start_thread (arg=<optimized out>) at ./nptl/pthread_create.c:442
#8  0x00007f640ab269d0 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
Possibly relevant: another thread is in the process of destroying an object, which leads to this (abbreviated) stack trace:
Thread 3 (Thread 0x7f64093ff640 (LWP 1898221) "ddc_beacon_rece"):
#0  __futex_abstimed_wait_common64 (private=128, cancel=true, abstime=0x0, op=265, expected=3821886, futex_word=0x7f622fb54910) at ./nptl/futex-internal.c:57
#1  __futex_abstimed_wait_common (cancel=true, private=128, abstime=0x0, clockid=0, expected=3821886, futex_word=0x7f622fb54910) at ./nptl/futex-internal.c:87
#2  __GI___futex_abstimed_wait_cancelable64 (futex_word=futex_word@entry=0x7f622fb54910, expected=3821886, clockid=clockid@entry=0, abstime=abstime@entry=0x0, private=private@entry=128) at ./nptl/futex-internal.c:139
#3  0x00007f640aa96624 in __pthread_clockjoin_ex (threadid=140059683931712, thread_return=thread_return@entry=0x0, clockid=clockid@entry=0, abstime=abstime@entry=0x0, block=block@entry=true) at ./nptl/pthread_join_common.c:105
#4  0x00007f640aa964c3 in ___pthread_join (threadid=<optimized out>, thread_return=thread_return@entry=0x0) at ./nptl/pthread_join.c:24
#5  0x0000561dbba8910a in Curl_thread_join (hnd=hnd@entry=0x7f638eb9ac80) at curl_threads.c:95
#6  0x0000561dbba7acbf in asyn_thrdd_await (data=data@entry=0x7f6407663800, addr_ctx=0x7f638eb9ac80, entry=entry@entry=0x0) at asyn-thrdd.c:557
#7  0x0000561dbba7ad00 in Curl_async_thrdd_destroy (data=data@entry=0x7f6407663800) at asyn-thrdd.c:586
#8  0x0000561dbba7a41d in Curl_async_destroy (data=data@entry=0x7f6407663800) at asyn-base.c:214
#9  0x0000561dbba6678e in Curl_close (datap=datap@entry=0x7f64093fa8b8) at url.c:303
#10 0x0000561dbba4610c in curl_easy_cleanup (ptr=<optimized out>) at easy.c:870
(additional application-specific stack levels not shown).
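For context on what each of those worker threads is doing: frame #3 in the traces is an ordinary getaddrinfo() call, which libcurl's threaded resolver runs on its own thread per in-flight lookup. Here is a minimal standalone sketch of that kind of lookup; the host and the hint values are illustrative assumptions, with only the service string "8482" taken from the traces, and this is plain POSIX code, not libcurl internals:

```c
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Perform the same kind of lookup that each getaddrinfo_thread runs
   (cf. frames #3/#4 in the traces) and count the results.
   Returns the number of addrinfo entries, or -1 on failure. */
static int count_addrinfo(const char *host, const char *service)
{
    struct addrinfo hints, *res, *ai;
    int n = 0;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;      /* allow both IPv4 and IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    if (getaddrinfo(host, service, &hints, &res) != 0)
        return -1;
    for (ai = res; ai; ai = ai->ai_next)
        n++;
    freeaddrinfo(res);
    return n;
}

int main(void)
{
    /* "8482" is the service string visible in the backtraces;
       the numeric host avoids any actual DNS traffic. */
    int n = count_addrinfo("127.0.0.1", "8482");
    printf("entries: %d\n", n);
    return n > 0 ? 0 : 1;
}
```

Normally such a call completes quickly and the thread exits; in the traces above the threads instead appear to be serialized on a private glibc lock inside __check_pf (frames #0/#1), which getaddrinfo takes while enumerating local interfaces.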
--
Unsubscribe: https://lists.haxx.se/mailman/listinfo/curl-library
Etiquette: https://curl.se/mail/etiquette.html
Received on 2025-10-17