Re: curl_multi_socket_action takes longer when using HTTPS
From: Daniel Stenberg via curl-library <curl-library_at_lists.haxx.se>
Date: Wed, 29 May 2024 14:33:04 +0200 (CEST)
On Tue, 28 May 2024, Richa Shah wrote:
> - Is it possible the time spent in pruning dead connections is getting
> counted towards DNS resolution? Since that's the very first latency that
> curl tracks for handles, my guess is anything curl does before actually
> reaching the resolution part of the request gets counted within the
> "name lookup" time.
Yes, I think that might be correct.
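If you want to verify that, compare the per-phase timers libcurl reports
after a transfer: everything that happens before resolution actually starts
lands in the first one. A minimal sketch from me (not from your setup),
using real getinfo values:

#include <stdio.h>
#include <curl/curl.h>

/* Dump libcurl's per-phase timers (microseconds) after a completed
 * transfer. Overhead spent before resolution starts is folded into the
 * name lookup number. */
static void report_times(CURL *handle)
{
  curl_off_t namelookup = 0, connect_us = 0, total = 0;
  curl_easy_getinfo(handle, CURLINFO_NAMELOOKUP_TIME_T, &namelookup);
  curl_easy_getinfo(handle, CURLINFO_CONNECT_TIME_T, &connect_us);
  curl_easy_getinfo(handle, CURLINFO_TOTAL_TIME_T, &total);
  fprintf(stderr, "namelookup: %" CURL_FORMAT_CURL_OFF_T " us, "
          "connect: %" CURL_FORMAT_CURL_OFF_T " us, "
          "total: %" CURL_FORMAT_CURL_OFF_T " us\n",
          namelookup, connect_us, total);
}

If the name lookup number shrinks when fewer dead connections have piled
up, that would confirm the attribution.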
> - The "prune_dead_connections" method is expected to be called at
> most once per second, and I do see that happening with my
> service as well.
> But given the high traffic I'm submitting to curl (~1000 or more HTTPS
> requests per second, all going to different URLs/IP addresses), we end up
> accumulating a lot of dead connections over time, and pruning
> ends up being
> a cost we have to pay before we can get to a request almost every second.
> Is it possible to somehow proactively prune dead connections?
This is just code, so yes, everything is certainly possible. I don't have
this work on my personal agenda though, and I have not heard anyone else say
they are working on it.
A first test for you would perhaps be to increase the scan frequency from
once per second to maybe ten times per second?
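As far as I know that interval is not exposed as an API knob, so changing
it means patching libcurl itself. What an application can tune already
today is how many connections the cache keeps around and for how long they
remain reuse candidates. A sketch with real option names but made-up
values:

#include <curl/curl.h>

/* Sketch only: a smaller connection cache and shorter reuse windows
 * limit how many dead connections can pile up between prunes. */
static void tune_cache(CURLM *multi, CURL *easy)
{
  curl_multi_setopt(multi, CURLMOPT_MAXCONNECTS, 100L);    /* cap cache size */
  curl_easy_setopt(easy, CURLOPT_MAXAGE_CONN, 30L);        /* no reuse if idle > 30s */
  curl_easy_setopt(easy, CURLOPT_MAXLIFETIME_CONN, 120L);  /* no reuse past 2 min (7.80.0+) */
}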
> Or can we prune in smaller batches so it doesn't end up adding too much time
> for an actual request?
If we can think of a better way to do it, then I think we should do it in a
better way! ;-)
A challenge with such a scheme is that it risks letting the number of
actually dead connections grow over time, because they get closed off too
slowly.
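To make the trade-off concrete, here is a hypothetical sketch of what a
bounded prune pass could look like. None of these names are libcurl
internals:

#include <stdlib.h>

/* Hypothetical batched pruning: close at most 'budget' dead connections
 * per sweep so a single request never pays for the whole backlog. The
 * catch mentioned above: too small a budget (or too rare a sweep) lets
 * the backlog grow. */
struct conn {
  struct conn *next;
  int dead;            /* set when the peer has gone away */
};

struct cache {
  struct conn *head;
};

static void prune_batch(struct cache *cc, int budget)
{
  struct conn **pp = &cc->head;
  while(*pp && budget > 0) {
    struct conn *c = *pp;
    if(c->dead) {
      *pp = c->next;   /* unlink from the cache */
      free(c);         /* stands in for the expensive TLS shutdown + close */
      budget--;
    }
    else
      pp = &c->next;
  }
}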
> I also noticed that closing an SSL connection is expensive because there
> are some more calls that curl makes to the TLS library before it actually
> closes the connection, so closing 100+ connections at a time serially
> will add up to quite a bit of time.
The core problem here is really that getting 100+ connections that suddenly
are dead is an edge case libcurl is not optimized for. Typically, connections
die more spread out over time.
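One more thing that might be worth testing in your particular workload (an
idea from me, not a fix in libcurl): since nearly every request goes to a
different host, a kept-alive connection is unlikely to ever get reused.
Forbidding reuse makes each transfer close its connection when it
completes, which spreads the TLS shutdown cost across requests instead of
leaving a backlog for the pruner:

#include <curl/curl.h>

/* Sketch: CURLOPT_FORBID_REUSE is a real option; whether it is a net win
 * depends on how often your workload would actually have reused a
 * connection. */
static void no_reuse(CURL *easy)
{
  curl_easy_setopt(easy, CURLOPT_FORBID_REUSE, 1L);
}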
-- 
 / daniel.haxx.se
 | Commercial curl support up to 24x7 is available!
 | Private help, bug fixes, support, ports, new features
 | https://curl.se/support.html