Re: HSTS cache cap allows eviction of security entries
From: Daniel Stenberg via curl-library <curl-library_at_lists.haxx.se>
Date: Wed, 1 Apr 2026 23:24:22 +0200 (CEST)
On Wed, 1 Apr 2026, Dan Fandrich via curl-library wrote:
>> I set the limit to 1000 entries, quite arbitrarily.
>
> This strikes me as a very low limit, especially since HSTS is intended as a
> security measure and having a limit at all is only there for DoS protection.
> Given that an HSTS entry doesn't take much space (maybe 100 bytes each per
> host), a limit of 1000 is only going to take 0.0001 GB of space. IMHO
> bumping it another 2 or 3 orders of magnitude is more appropriate if this is
> going to be a hard limit.
I would be okay with a (much) higher limit if we think that helps.
The question is: if we think "the attack" as described is valid, does it
really help to have the limit at 100,000?
If we think something can trigger an application into doing 1000 spurious
HTTPS transfers to different hosts to drain the list, can't it then also do
100,000 requests?
Do we really accept that as a threat we need to protect against? I'm not
convinced.
> Three orders of magnitude is only a million hosts which a crawler with a
> gigabit Internet connection could reach in a mere 30 seconds, after which it
> would start expunging entries, making HSTS useless for it.
Sure, but what kind of weirdo use case is this and why do we need to care
about it? A user who wants to talk to one million hostnames now over HTTPS so
that they can later do it over HTTP?
AFAIK, the normal use case for HSTS is a small set of hosts. After all, users
should just do https:// from the start and be fine with it.
--
 / daniel.haxx.se || https://rock-solid.curl.dev

Received on 2026-04-01