curl-library
Talking HTTPS to proxies (fwd)
Date: Thu, 14 Apr 2011 23:11:03 +0200 (CEST)
Hi friends,
I just wanted to make everyone aware that apparently there's a push from some
users toward allowing connections to proxies over HTTPS, as is explained at
length below in a message taken from the httpbis mailing list. (Originally
archived here: http://lists.w3.org/Archives/Public/ietf-http-wg/2011AprJun/0081.html)
Apparently Chrome, for example, already supports this, as William Chan replied
in http://lists.w3.org/Archives/Public/ietf-http-wg/2011AprJun/0082.html
Anyone feel like working on making libcurl support this? It really shouldn't
be that hard to add.
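To give an idea, here's a minimal sketch of how it might look with the
existing libcurl proxy options, assuming we introduced a hypothetical
CURLPROXY_HTTPS proxy type (that name is made up for this sketch; no such
value exists in libcurl today):

  #include <curl/curl.h>

  int main(void)
  {
    CURL *curl = curl_easy_init();
    if(curl) {
      curl_easy_setopt(curl, CURLOPT_URL, "http://example.com/");

      /* speak TLS to the proxy itself; CURLPROXY_HTTPS is a made-up
         value for this sketch and does not exist (yet) */
      curl_easy_setopt(curl, CURLOPT_PROXY, "proxy.example.com:443");
      curl_easy_setopt(curl, CURLOPT_PROXYTYPE, (long)CURLPROXY_HTTPS);

      /* with the proxy leg encrypted, even Basic auth would be safe */
      curl_easy_setopt(curl, CURLOPT_PROXYUSERPWD, "user:password");

      curl_easy_perform(curl);
      curl_easy_cleanup(curl);
    }
    return 0;
  }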
---------- Forwarded message ----------
Date: Thu, 14 Apr 2011 21:04:40
From: Willy Tarreau
To: httpbis mailing list
Subject: Talking HTTPS to proxies
Hello,
I'm regularly encountering what I would call dirty and unreliable
hacks to provide proxy authentication in enterprises. And with the
rise of external proxy services, it's going to get even worse.
A common issue is that in many enterprises, a password must never pass
in clear text over the network. So bye-bye Basic auth. Digest requires
that a database of cleartext passwords exists, which is most commonly
refused too. Some proxies support NTLM auth in MS environments, but not
all of them do, and some proxies simply cannot reach such a service from
where they're placed.
Because of this, we commonly see cookie-based authentication methods
which rely on redirects and which are not very reliable, even leaving
efficiency aside.
The overall principle is approximately the following (approximately because
I've seen several variants, depending on whether popups or forms are used
and on the trade-off between security and comfort); a rough trace of the
exchanges follows the list:
1) the browser tries to access example.com through the proxy
2) the proxy wants the user to authenticate and redirects the browser to a
host under the proxy's control, over HTTPS.
3) the browser follows the redirect through the proxy and gets either an
auth form or a 401
4) the user enters his credentials and submits them. The request still
contains a query string with all the information about the original URL
at example.com
5) the proxy accepts the credentials and issues a redirect to a fake host
under example.com, with the request still encoded in the query string
along with a token. It also emits a cookie for the authentication host.
6) the browser follows the redirect and requests the fake host over HTTP
7) the proxy intercepts the request and returns a redirect to the initial
URL with a Set-Cookie header, so that the browser will present this
cookie as long as the user remains on the same site
8) the browser follows that redirect and finally goes to example.com with
the cookie.
9) when the user goes to another site, steps 1 and 2 apply; the proxy sees
the cookie that was delivered at step 5 and is able to jump directly to
that step.
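Here is that rough trace; all hostnames, paths and parameter names below
are invented for illustration:

  1) GET http://example.com/ HTTP/1.1          (cleartext, via the proxy)
  2) 302 with Location: https://auth.proxy.example/?url=http%3A//example.com/
  3) GET /?url=... over HTTPS  ->  auth form (or 401)
  4) POST /login  (credentials, plus the url= query string)
  5) 302 with Location: http://proxyauth.example.com/?url=...&token=...
     plus a Set-Cookie for auth.proxy.example
  6) GET http://proxyauth.example.com/?url=...&token=...  (cleartext again)
  7) the proxy intercepts it: 302 with Location: http://example.com/
     plus a Set-Cookie for example.com
  8) GET http://example.com/ with the cookie  ->  finally served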
Overall, that is a lot of redirects just to safely authenticate a user
over the network. Due to this, I've seen some setups where the credentials
are bound to the client's IP address only. That way, once the user is
authenticated, anybody can access the proxy under his credentials simply
by being relayed through his PC. This is a common trick in big enterprises.
Another workaround consists of authenticating the connection itself,
regardless of the requests inside it. Several clients can then share the
same connection by inserting a proxy in the middle and all browsing over
that one connection (I have encountered this case too).
And the cherry on top of the cake is that this doesn't even work well.
Some sites make use of Flash, which does not send the cookies (so the
proxy vendors use other tricks for that), and when the user's cookie
for the authentication host expires, you see lots of funny things. If
it expires while loading an image, you often never get the image and
don't see the auth form either. XHR and POSTs don't work well either: a
POST to a site not covered by the current cookie will have its contents
lost, and both XHR and POSTs are lost when the auth cookie expires (very
annoying in webmails, where you know that all your mail's contents are
gone when you see the popup after clicking "send").
What I've realized is that all those horrors only exist because browsers
offer no provision for connecting to proxies over HTTPS instead of HTTP.
It would be amazingly simple. We'd just have to check a "use HTTPS" box
in the browser's proxy config, verify the proxy's certificate, and
everything could safely be exchanged with the proxy. Even Basic auth
would be easy to use and safe. We could also make use of client
certificates with this.
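For illustration, once a TLS session is established with the proxy (and
its certificate verified), the proxy request would look exactly like
today's, only encrypted, so the trivially decodable Basic credentials
would no longer be readable on the wire (hostname and credentials below
are invented):

  (TLS handshake with proxy.example.com, certificate verified)
  GET http://example.com/ HTTP/1.1
  Host: example.com
  Proxy-Authorization: Basic dXNlcjpwYXNzd29yZA==   ("user:password")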
So what I'm wondering now is why we have not seen this yet. Is it because
nobody has raised the issue yet, because the vendors who implement the
horrors described above are too happy to be ahead of the competition when
it comes to deploying safe authentication methods, because there are
major drawbacks to doing this, or because I'm stupid and have never
found out how to enable it?
Some proxies probably already support it as a side effect of being usable
as SSL reverse proxies; we only need browsers to add the checkbox to
their proxy config to make use of it. I have heard about some sites where
an stunnel-like component is installed on the user's PC (either as a Java
applet or as a real daemon) and simply wraps the cleartext HTTP traffic
in HTTPS to connect to the proxy. (I did not see those myself; I only saw
applets used for stronger crypto than what the browser offers, and they
were not deployed as explicit proxies.)
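For what it's worth, the stunnel side of such a setup is tiny; here is a
sketch of a client-mode configuration with invented addresses (the
browser would then point at 127.0.0.1:3128 as a plain HTTP proxy):

  ; accept cleartext locally, wrap it in TLS towards the real proxy
  client = yes
  verify = 2
  CAfile = /etc/ssl/certs/ca-certificates.crt

  [proxy]
  accept  = 127.0.0.1:3128
  connect = proxy.example.com:8443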
Shouldn't we try to encourage both browser vendors and proxy vendors to
enable HTTPS?
Thanks for any insight,
Willy
-------------------------------------------------------------------
List admin: http://cool.haxx.se/list/listinfo/curl-library
Etiquette: http://curl.haxx.se/mail/etiquette.html