
Re: git clone with basic auth in url directly returns authentication failure after 401 received under some git versions (fwd)

From: Timothe Litt <litt_at_acm.org>
Date: Sat, 20 Aug 2022 20:51:49 -0400

On 20-Aug-22 18:49, Daniel Stenberg via curl-library wrote:
> FYI,
>
> Here's an interesting side-effect that the git team ran into as a
> direct result of our fixing of CVE-2022-27774. 
>
> I responded on the git list:
> https://marc.info/?l=git&m=166103499714595&w=2

I'm not convinced of the premise of the CVE.  If a malicious actor can
generate a redirect, he can capture any credentials provided in the
request at that time.  If there are none, the redirect causes the client
to interact with the target, not the redirect generator.  If the actor
controls the target, it can prompt for and capture credentials.  But
changing protocol only helps him to the extent that it somehow makes
credentials more visible.  E.g. switching from TLS to an unencrypted
connection might allow a packet capture.  But he could just as easily
save the credentials on his side of a TLS connection to the target that
he putatively controls.  And curl verified the hostname when it started
TLS...
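
As a concrete illustration of where the credentials actually travel, here is a
minimal libcurl sketch of my own (the URL and user:password are placeholders,
error handling trimmed).  CURLOPT_UNRESTRICTED_AUTH is the knob that decides
whether credentials given for the original URL follow a redirect away from
that host, and it defaults to off:

    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
      CURLcode rc;
      CURL *curl = curl_easy_init();
      if(!curl)
        return 1;

      /* credentials intended for the *original* host only */
      curl_easy_setopt(curl, CURLOPT_URL,
                       "https://git.example.com/repo/info/refs");
      curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");

      /* follow redirects, but not forever */
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      curl_easy_setopt(curl, CURLOPT_MAXREDIRS, 5L);

      /* 0L (the default) tells curl to drop the credentials when a redirect
         leaves the original host; 1L forwards them anywhere, which is the
         permissive behaviour the advisory worries about */
      curl_easy_setopt(curl, CURLOPT_UNRESTRICTED_AUTH, 0L);

      rc = curl_easy_perform(curl);
      if(rc != CURLE_OK)
        fprintf(stderr, "curl: %s\n", curl_easy_strerror(rc));

      curl_easy_cleanup(curl);
      return (int)rc;
    }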

The real problem would seem to be that the actor can generate a
redirect.  That can happen if the server that the client intends to talk
to is compromised.  Or if some other server responds.  Restricting
redirects doesn't really help either case.

A compromised server (including a MITM or a malicious webmaster) is a
problem that the client can't solve.

If the authority (hostname) is unchanged, it's the same security domain. 

Getting another server to respond is possible - DNS poisoning and
routing corruption come to mind.  Neither is trivial.

There are safeguards:

Even if the hostname is unchanged, I think that redirecting to an
unrelated protocol should probably clear auth - e.g. https? to ftp -
there's no reason to assume that the credentials are the same, and if
they are, it's probably undesirable password reuse.  I'd say
"definitely", but this would effectively be a client imposing a security
(password) policy on the server.  It's not clear that everyone would
agree this is acceptable.
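
For what it's worth, a client that wants the stricter behaviour can already
refuse cross-protocol redirects outright.  A rough sketch (my own, helper name
made up) using CURLOPT_REDIR_PROTOCOLS, which limits the protocols a
Location: header is allowed to switch to:

    #include <curl/curl.h>

    /* Only ever follow redirects to http or https; a redirect to ftp etc.
       makes curl_easy_perform() fail instead of carrying anything along. */
    static void restrict_redirect_protocols(CURL *curl)
    {
      curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);
      curl_easy_setopt(curl, CURLOPT_REDIR_PROTOCOLS,
                       (long)(CURLPROTO_HTTP | CURLPROTO_HTTPS));
    }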

DNS will resolve a hostname the same way every time (modulo some
transient states).  And curl caches name resolutions, so it would
require unusual circumstances for a redirect to go to a different host
given the same name.  Sites that care (and that should be everyone,
though it isn't) should be using DNSSEC to deal with DNS cache
poisoning.  So if the authority is the same, the request is going to the
same server.
And a malicious server would have captured the credentials when it got
the first request.  So I don't see how propagating credentials to the
same hostname is an issue.
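
And anyone who doesn't want to rely on the resolver at all can pin the
hostname to a known address with CURLOPT_RESOLVE, so the name in a redirect is
resolved from a list the application supplies rather than from DNS.  A rough
sketch (host and address are made up):

    #include <curl/curl.h>

    /* Pre-populate curl's DNS cache: foo.example.net:443 always resolves
       to 192.0.2.10, regardless of what the resolver would say. */
    static struct curl_slist *pin_host(CURL *curl)
    {
      struct curl_slist *pins =
        curl_slist_append(NULL, "foo.example.net:443:192.0.2.10");
      curl_easy_setopt(curl, CURLOPT_RESOLVE, pins);
      return pins;   /* free with curl_slist_free_all() after the transfer */
    }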

But there's a second line of defense - redirecting to https means
there's a TLS connection, and for that to succeed, the hostname has to
appear in a certificate whose root authority is trusted.  If that's
broken, there are bigger problems.
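
That second line of defense is only as good as certificate verification,
which libcurl has on by default; spelling the defaults out, only to make the
dependency explicit:

    #include <curl/curl.h>

    static void require_verified_tls(CURL *curl)
    {
      /* verify the certificate chain against the trust store (default) */
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYPEER, 1L);
      /* verify that the certificate names the host we asked for (default) */
      curl_easy_setopt(curl, CURLOPT_SSL_VERIFYHOST, 2L);
    }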

The rarer (but real) case of redirecting https to http is also
protected by the TLS connection (the https server is validated); however
unwise such a redirect may be, it's the server's responsibility to
decide what to do if credentials are offered.

We can also have a chain of redirects...but that reduces to the same
issues stepwise.

I think it would be safe to allow credential reuse on redirect if the
authority is the same.  A somewhat more conservative rule would also
require that the protocol doesn't change or is related (e.g. http<->https).
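
If someone wanted to prototype that rule outside of libcurl proper, the CURLU
parser (libcurl 7.62.0 or later) makes the authority comparison easy.  This is
a rough sketch of mine, not an implementation proposal:

    #include <strings.h>
    #include <curl/curl.h>

    /* Return 1 if the redirect target has the same host as the original. */
    static int same_authority(const char *original, const char *redirect)
    {
      int same = 0;
      char *h1 = NULL, *h2 = NULL;
      CURLU *u1 = curl_url(), *u2 = curl_url();

      if(u1 && u2 &&
         !curl_url_set(u1, CURLUPART_URL, original, 0) &&
         !curl_url_set(u2, CURLUPART_URL, redirect, 0) &&
         !curl_url_get(u1, CURLUPART_HOST, &h1, 0) &&
         !curl_url_get(u2, CURLUPART_HOST, &h2, 0))
        same = !strcasecmp(h1, h2);   /* hostnames compare case-insensitively */

      curl_free(h1);
      curl_free(h2);
      if(u1) curl_url_cleanup(u1);
      if(u2) curl_url_cleanup(u2);
      return same;
    }

An application that follows redirects itself (CURLOPT_FOLLOWLOCATION off,
reading CURLINFO_REDIRECT_URL after each transfer) could apply this check,
plus a scheme check for the more conservative variant, before re-sending
credentials.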

The port number is somewhat interesting.  For it to be significant, the
malicious actor would have to get control of a port that the initiator
trusts in order to send the redirect - sending credentials to an
untrusted service is on the initiator.  Again, if the actor controls the
redirect, he gets the credentials then, so clearing them in curl doesn't
help.  In any case, there's nothing magic about port numbers.  People
run https? on non-standard ports, and non-https? protocols on ports
80/443.

So I don't think that including the port number in a constraint rule
will be productive.

It's also not productive to include the other parts of a URL in a rule -
e.g. I have conditional redirects from http://foo.example.net/bar(*) to
https://foo.example.net/debug/bar$1.

Comparing the authority doesn't solve all cases - e.g.
http://foo.example.net(*) to https://www.example.net/foo$1 is not
uncommon.  But comparing the authority of a redirect does handle many cases.

A simpler rule would allow redirects from http to https and from https
to http without looking at the authority.  Assuming that you believe in
DNS, it's not clear that you lose anything.  Curl can't defend against
compromised servers.  If it connects to the intended server, it did its
job even if that server sends a malicious redirect. 

Finally, there's the case of typosquatting.  But again, if a malicious
server is generating the redirect, it already has (or can get) the
credentials...

Bottom line: it's not clear that the CVE presents a real problem, or
that restricting redirects in curl (or any client) has anything more
than a cosmetic positive effect.  The client needs to connect to a
host/service that it trusts.  Creating/maintaining that trust is a
server issue, not a client problem.  Restricting redirects can have
adverse consequences, but some restrictions seem safer than others.


Timothe Litt
ACM Distinguished Engineer
--------------------------
This communication may not represent the ACM or my employer's views,
if any, on the matters discussed.



