Question about DNS timeout in libCurl
From: Dmitry Karpov via curl-library <curl-library_at_lists.haxx.se>
Date: Fri, 10 Dec 2021 22:22:40 +0000
Hi Daniel,
I have a question related to the discussion below.
While parallel query requests in c-ares are not there yet, I am wondering whether it is possible to control the DNS timeout, or to expose such control via curl handle options in the future?
I noticed that libcurl always uses the default c-ares timeout value, which is 5s, and that the Happy Eyeballs DNS timeout also relies on that value by hard-coding it in the HAPPY_EYEBALLS_DNS_TIMEOUT constant. In some cases, however, 5s may be too long a wait before switching to a different name server.
I also noticed that older c-ares versions used somewhat randomized values, so the actual timeouts were less than 5s, which allowed skipping over bad name servers faster than the latest c-ares versions, which use exactly 5s.
Because there is currently no such option on the surface, and modifying the libcurl and c-ares code to change the default values is probably the only way to get DNS timeouts below 5s, do you think making such changes would be very risky, even as a short-term solution?
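For reference, c-ares itself already exposes this knob through ares_init_options(); what is missing is a libcurl option that forwards it to the channel libcurl creates internally. A minimal standalone c-ares sketch, with an arbitrary 2-second timeout and 2 tries as example values:

/* Standalone c-ares channel with a shorter per-query timeout and retry
 * count. As reported above, libcurl currently leaves the per-query
 * timeout at the c-ares default (5s) when it sets up its own channel. */
#include <string.h>
#include <ares.h>

static int make_channel(ares_channel *channel)
{
  struct ares_options opts;
  int optmask = ARES_OPT_TIMEOUTMS | ARES_OPT_TRIES;

  memset(&opts, 0, sizeof(opts));
  opts.timeout = 2000;   /* per-query timeout, in milliseconds */
  opts.tries = 2;        /* attempts per name server */

  if(ares_library_init(ARES_LIB_INIT_ALL) != ARES_SUCCESS)
    return -1;
  if(ares_init_options(channel, &opts, optmask) != ARES_SUCCESS)
    return -1;
  return 0;
}

A curl handle option for this would presumably just pass the same fields through when libcurl initializes its resolver channel.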
Thanks,
Dmitry Karpov
>
> -----Original Message-----
> From: Dmitry Karpov
> Sent: Monday, November 29, 2021 12:12 PM
> To: Daniel Stenberg <daniel_at_haxx.se>
> Subject: RE: Happy Eyeballs doesn't seem to work with c-ares when IPv6
> name servers on top of the name server list don't respond
>
>> I argue that it should, yes, but also that it should probably send the query to several (up to N) resolvers at once, so maybe do two IPv6 and two IPv4 attempts in parallel.
>
> Yes, doing such parallel requests would be a great c-ares extension.
> Currently, c-ares follows the intended use of the name server list in resolv.conf: the servers are treated as a list of "preferred" servers and are tried one after the other.
>
> Unfortunately, this "preferred server list" doesn't distinguish between IPv6 and IPv4 and treats them the same way.
> So, problems in one stack may inadvertently affect the other when name servers from the bad network stack sit at the top of the list.
>
> I ran into this issue even when using the IPv4-only IP resolve option on libcurl handles (a minimal sketch of that setup follows the quoted thread below).
> Even though I only needed A records for IPv4 name resolution, it took ~10s to start IPv4 transfers because resolv.conf had two non-functional (from a network stack perspective) IPv6 name servers at the top (e.g. 2001:558:feed::1).
>
> In this case, bad IPv6 connectivity affected IPv4 single-stack transfers in an unexpected and tricky way.
> So I think that, on the longer road to doing multiple parallel requests in c-ares, an option to distinguish between IPv4 and IPv6 and to run at least one IPv6 and one IPv4 query in parallel would be a very welcome addition.
>
> Otherwise, it is very difficult to handle such tricky dual-stack related issues in curl/c-ares clients.
>
> Thanks,
> Dmitry Karpov
>
>
> -----Original Message-----
> From: Daniel Stenberg <daniel_at_haxx.se>
> Sent: Thursday, November 25, 2021 11:01 PM
> To: Dmitry Karpov <dkarpov_at_roku.com>
> Subject: RE: Happy Eyeballs doesn't seem to work with c-ares when IPv6
> name servers on top of the name server list don't respond
>
> On Thu, 25 Nov 2021, Dmitry Karpov wrote:
>
>> c-ares internally understands both the IPv4 and IPv6 families, but it
>> sends queries in the order of the name server list regardless of whether
>> a given server is an IPv4 or an IPv6 address. In other words, c-ares
>> doesn't seem to implement any mechanism like Happy Eyeballs in connection
>> establishment, such as trying 2001:558:feed::1 (as the "IPv6 primary
>> server") and 75.75.75.75 (as the "IPv4 primary server") in parallel for
>> each query type.
>
> I argue that it should, yes, but also that it should probably send the query to several (up to N) resolvers at once, so maybe do two IPv6 and two IPv4 attempts in parallel.
>
>
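For completeness, the IPv4-only setup referenced in the quoted message above is just the standard CURLOPT_IPRESOLVE option; here is a minimal sketch of such a handle, with a placeholder URL and an arbitrary 3-second connect timeout added as a partial mitigation (the connect timeout also caps the name resolution phase):

/* Handle restricted to IPv4 (A record) resolution, as described in the
 * quoted report. With c-ares, the resolv.conf server list is still
 * walked in order, for example:
 *   nameserver 2001:558:feed::1   (unresponsive IPv6 server, tried first)
 *   nameserver 75.75.75.75        (working IPv4 server, tried after timeout)
 * so dead IPv6 name servers at the top still delay the query. */
#include <curl/curl.h>

static CURLcode fetch_ipv4_only(const char *url)
{
  CURLcode res = CURLE_FAILED_INIT;
  CURL *curl = curl_easy_init();

  if(curl) {
    curl_easy_setopt(curl, CURLOPT_URL, url);
    curl_easy_setopt(curl, CURLOPT_IPRESOLVE, CURL_IPRESOLVE_V4);
    /* partial mitigation: bound the whole resolve + connect phase */
    curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 3000L);
    res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  return res;
}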