Re: Setting connection timeout per host?
From: Midnight Wonderer via curl-library <curl-library_at_cool.haxx.se>
Date: Sun, 10 Jan 2021 23:55:07 +0700
On Sun, 10 Jan 2021 at 20:44, Daniel Stenberg <daniel_at_haxx.se> wrote:
> I would rather we focused on what exactly the problem and use case you have
> with the existing algorithm is that makes you not able to just perhaps extend
> the timeout a little?
Good point!
To add some background:
As we all know, if you can't control the BGP table yourself (for
anycast networking), the only reliable way to achieve real-time
failover and eliminate a single point of failure is "client retry".
This potentially affects most HTTP-based services in existence.
But client retry is not practical today, mostly because there is no
standard for it: some clients take as long as a minute to try the next
IP address, and each one, curl included, goes its own way.
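To illustrate what I mean by "client retry", here is a minimal sketch
of the kind of loop an application has to hand-roll with libcurl
today; the URL, retry count, and retry policy are placeholders of my
own, not anything libcurl provides:

#include <curl/curl.h>

/* Minimal client-retry sketch: re-run the whole transfer when the
 * connection itself fails. Which errors are worth retrying is an
 * assumption for illustration, not a libcurl facility. Caller is
 * expected to have run curl_global_init() already. */
static CURLcode fetch_with_retry(const char *url, int max_tries)
{
  CURL *curl = curl_easy_init();
  CURLcode rc = CURLE_FAILED_INIT;
  if(!curl)
    return rc;
  curl_easy_setopt(curl, CURLOPT_URL, url);
  for(int i = 0; i < max_tries; i++) {
    rc = curl_easy_perform(curl);
    if(rc != CURLE_COULDNT_CONNECT && rc != CURLE_OPERATION_TIMEDOUT)
      break; /* success, or an error that retrying will not fix */
  }
  curl_easy_cleanup(curl);
  return rc;
}

Every vendor ends up writing some variation of this, each with its
own timeouts and its own idea of which errors deserve a retry.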
We are a small API vendor with no control over BGP, but multi-cloud
deployment is within reach.
Depending on the facilities, some DDoS mitigation appliances take a few
minutes to kick in; that's a few minutes of downtime.
And steering traffic with DNS records suffers from TTL-based caching.
We can't compete with bigger firms on mission-critical projects that
expect no downtime (or statistically close to 100% uptime) because we
have no means to support that.
We can measure the expected maximum ping time to our servers (from any
location on earth where our clients could possibly place their
servers), which may be, say, 800 ms.
And if we know in advance that our clients use libcurl, directly or
indirectly, we can recommend a connection timeout to them by working
out the math.
Say we want 3 tries of our servers. As I understand curl's current
behavior, it halves the remaining connect time for each address it
still has left to try, so we need 6.4 s as the connection timeout to
achieve our control parameters: the three attempts are then allowed
3.2 s, 1.6 s, and 1.6 s respectively, and even the last one keeps a
margin over the 800 ms ping.
Even so, in the worst-case scenario, the time wasted on the first 2
connection attempts can be as high as 3.2 s + 1.6 s = 4.8 s before
reaching the working server, whereas a per-attempt timeout of 800 ms
would waste only 2 x 800 ms = 1.6 s.
For an API, a 4.8 s vs. 1.6 s delay can be a deal-breaker. That
squanders the data we acquired; we could make it 3 times faster.
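For completeness, this is how we would recommend our clients apply
that 6.4 s budget with libcurl as it works today, using the existing
CURLOPT_CONNECTTIMEOUT_MS option; a minimal sketch with a placeholder
URL:

#include <curl/curl.h>

int main(void)
{
  curl_global_init(CURL_GLOBAL_DEFAULT);
  CURL *curl = curl_easy_init();
  if(curl) {
    /* placeholder endpoint, standing in for our API */
    curl_easy_setopt(curl, CURLOPT_URL, "https://api.example.com/");
    /* one shared 6400 ms connect budget across all addresses; as I
     * understand it, curl halves the remainder per leftover address,
     * so the attempts get roughly 3.2 s, 1.6 s, and 1.6 s */
    curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT_MS, 6400L);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);
  }
  curl_global_cleanup();
  return 0;
}

Notice there is no way in that snippet to say "give each address at
most 800 ms"; that per-host, per-attempt knob is exactly what I am
asking about.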
And since a lot of people rely on curl, I am trying to get
support from you.
If the feature is there, I can tell our clients to utilize it for a
better chance of serviceability.
I know full well that I'll gain nothing right now by bringing this up;
it takes time for a required feature to propagate.
I just believe this feature can impact the future in a positive way.
And in the world of APIs, I don't think only 0.1% of users would care
about it, if we count API consumers as users too.
-------------------------------------------------------------------
Unsubscribe: https://cool.haxx.se/list/listinfo/curl-library
Etiquette: https://curl.se/mail/etiquette.html
Received on 2021-01-10