SNI progressive enhancement

Today marks another big milestone in the availability of ubiquitous SSL encryption: the «Let’s Encrypt» project got their cross-signature, so in a few more weeks, they will be ready for the public to use.

However, with an unlimited supply of free SSL certificates, we get another problem: because back in the day nobody thought about name-based virtual hosting, the initial implementation of SSL didn’t support the client telling the server which host it was trying to connect to. That meant the server didn’t know which certificate to present when multiple host names were served from the same address.

This meant that for every site you wanted to offer over SSL, you needed a dedicated IP address – and those are getting harder to come by as we run out of them.

«SNI» is a protocol extension that allows the client to tell the server the host name it’s connecting to, so the server can choose the correct certificate to serve. This fixes the above issue and finally allows virtual hosting based on the host name even over SSL.
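
You can see SNI in action with openssl’s s_client, which sends the host name given via -servername during the handshake (the host name here is just an example):

    openssl s_client -connect example.com:443 -servername example.com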

Unfortunately, SNI isn’t as widely supported as we’d like: older Android devices and any IE on Windows XP (which still accounts for a sizable portion of our users) don’t support SNI.

What’s also tricky is that you don’t know a client doesn’t support SNI until it’s too late: they connect to your port 443 without sending a host name, and now the server needs to a) answer and b) send a server certificate. So unless the client accidentally hit the correct host name, it will get a certificate mismatch and display the usual SSL error message.

This is of course not very good UX as you don’t even get to tell the user what’s wrong before they see the browser-specific error message.

However, I still want to support SSL for all my sites wherever I can. If I could keep non-SNI-capable clients on an unencrypted site and add encryption only for clients that do support SNI, then encryption would become a progressive enhancement. The sites I’m dealing with aren’t that far into «needs encryption» territory, so offering encryption only for good (read: non-outdated) browsers is a viable option, especially as I want to offer this for free for the sites I’m hosting and I only have so many IP addresses at my disposal right now.

The usual advice for this is user agent sniffing, but that’s error-prone. I’d much rather feature-detect.

So after a bit of thinking, I came up with the following scheme (it requires JavaScript, though – a minimal sketch follows the list):

  • Over port 80, serve the normal site unencrypted instead of just redirecting to HTTPS.
  • On that regular site, make a JSONP request for a beacon file on your site over HTTPS.
  • If that beacon loads properly, the client obviously supports SNI, so redirect to the HTTPS version of your site using JavaScript.
  • If the beacon doesn’t load, the browser probably doesn’t support SNI, so keep them on the unencrypted page. If you want to, set a cookie to prevent further probing on subsequent requests.
  • On port 443, serve an HSTS header, so the next time the browser visits, it will use HTTPS from the start.
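
Here’s the promised sketch of the probe. The beacon URL (/sni-beacon.js), the callback name and the cookie name are all made up; the beacon file itself would contain nothing but a call to sniOk():

    // skip the probe if we already know this browser can't do SNI
    if (!/(^|; )no_sni=1/.test(document.cookie)) {
        var sniProbeDone = false;

        // JSONP callback: this only ever runs if the HTTPS request
        // succeeded, i.e. the SSL handshake (and thus SNI) worked
        window.sniOk = function () {
            sniProbeDone = true;
            location.href = 'https://' + location.host +
                location.pathname + location.search;
        };

        var script = document.createElement('script');
        script.src = 'https://' + location.host + '/sni-beacon.js';
        document.getElementsByTagName('head')[0].appendChild(script);

        // no callback after a few seconds: assume no SNI support and
        // remember that, so we don't probe on every page view
        setTimeout(function () {
            if (!sniProbeDone) {
                document.cookie = 'no_sni=1; path=/';
            }
        }, 5000);
    }

The HSTS header mentioned in the last step is a single response header along the lines of Strict-Transport-Security: max-age=31536000.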

IE8 will still show the page correctly but also show a warning that it has blocked content for your own security, so you might want to immediately redirect again (with the cookie set) in order to get rid of the warning.

Unlike the usual immediate redirect to HTTPS, this means that the first page view, even for compliant browsers, will be unencrypted, so make absolutely sure that you serve all your cookies with the Secure flag. It also means that in order to get to the encrypted version of the page, visitors need JavaScript enabled – at least the first time.
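
That’s just a matter of adding the Secure flag to every Set-Cookie header; the name and value here are placeholders:

    Set-Cookie: session=opaque-token; Secure; HttpOnly; Path=/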

Maybe you can come up with some crazy hack using frames, but this method seems to be the cleanest.

when in doubt – SSL

Since 2006, as part of our product, we have been offering barcode scanners with GSM support, either to send orders directly to the vendor or to transmit products into the web frontend where you can further edit them.

Even though the devices (Windows Mobile – crappy, and in the process of being updated) do support WiFi, we really only support GSM, because that means we don’t have to share the end user’s infrastructure.

This is a huge plus because it means that no matter how locked-down the
customer’s infrastructure, no matter how crappy the proxy, no matter the IDS in use, we’ll always be able to communicate with our server.

Until, of course, the mobile carrier most used by our customers decides
to add a “transparent” (do note the quotes) proxy to the mix.

We were quite stumped last week when we got reports of the mobile devices showing an HTTP error 408, especially because we were not seeing any 408s in our logs.

Worse, tcpdump clearly showed that we were getting an RST packet from the client, sometimes before sending data, sometimes while sending data.

Strange: the client is showing a 408, the server is seeing an RST from the client. That doesn’t make sense.

Tethering my Mac to an iPhone’s personal hotspot with a SIM card of the mobile provider in question made it clear: we are no longer talking directly to our server. No. What the client receives is an HTML-formatted 408 error message from a proxy server.

Do note the “DELETE THIS LINE” and “your organization here” comments. What a nice touch. Somebody really spent a lot of time getting this up and running.

Now granted, taking 20 seconds before being able to produce a response is a bit on the long side, but unfortunately, some versions of the scanner software require gzip compression, and the way we compress means the complete response body has to exist before the first byte can be sent, so we have to prepare the full response (40 megs uncompressed) up front – that just takes a while.

But consider long polling or server-sent events – receiving a 408 after just 20 seconds? That’s annoying, wastes resources and is probably not something you’re prepared for.

Worse, nobody was notified of this change. For 7 years, the clients were able to connect directly to our server. Then one day that changed, and now they can’t. No communication, no time to prepare, and limits that are certainly too strict not to affect anyone (not just us – see my remark about long polling).

The solution, in the end, is, as so often, to use SSL. SSL connections are opaque to any intercepting proxy. A proxy can’t decrypt the data without the client noticing. An SSL connection can’t be inspected and can’t be messed with.

Sure enough: The exact same request that fails with that 408 over HTTP
goes through nicely using HTTPS.

This trick works every time somebody is messing with your connection. Something f’ing up your WebSocket connection? Use SSL! Something messing with your long polling? Use SSL. Something decompressing your response but not stripping off the Content-Encoding header (yes, that happened to me once)? Use SSL. Something replacing arbitrary numbers in your response with asterisks (yep, happened too)? You guessed it: use SSL.

Of course, there are three things to keep in mind:

  1. Due to the lack of SNI in the world’s most used OS and browser
    combination (any IE under Windows XP), every SSL site you host requires
    a dedicated IP address. Which is bad, considering that we are running
    out of addresses.

  2. All of the bigger mobile carriers have their CA in the browsers’
    trusted list. Ethics aside, there is no reason whatsoever for them
    not to start doing all the crap I described to SSL connections too,
    re-encrypting the connection and faking a certificate using their
    trusted CA.

  3. Failing that, they might still just block SSL at some point, but as
    more and more sites go SSL-only (partially for the above reasons, no
    doubt), outright blocking SSL becomes more and more unlikely.

So. Yes. When in doubt: use SSL. Not only does it help your users’ privacy, it also fixes a ton of technical issues caused by effectively unauthorized third parties messing with your traffic.

how to accept SSL client certificates

Yesterday I was asked on twitter how you would use client certificates
on a web server in order to do user authentication.

Client certificates are very handy in a controlled environment and they
work really well to authenticate API requests. They are, however,
completely unusable for normal people.

Getting meaningful information from client-side certificates happens as part of the SSL connection setup, so it must be done by whatever piece of your stack terminates the client’s SSL connection.

In this article I’m going to look into doing this with nginx and Apache
(both traditional frontend web servers) and in node.js which you might
be using in a setup where clients talk directly to your application.

In all cases, what you will need is a means for signing certificates in
order to ensure that only client certificates you signed get access to
your server.

In my use cases, I’m usually using openssl, which comes with some subcommands and a helper script to run as a certificate authority. On the Mac, if you prefer a GUI, you can use Keychain Access, which has everything you need in the “Certificate Assistant” submenu of the application menu.

Next, you will need the public key of your users. You can have them send in a traditional CSR and sign that on the command line (use openssl req to create the CSR and openssl ca to sign it), or you can have them submit an HTML form using the <keygen> tag (yes, that exists – read up on it on MDN, for example).
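
As a rough sketch of the command line route (the file names are placeholders, and openssl ca assumes you have set up the CA directory structure referenced by your openssl configuration):

    # user side: generate a private key plus a CSR; the key stays with the user
    openssl req -new -newkey rsa:2048 -nodes -keyout user.key -out user.csr

    # CA side: sign the CSR, producing the user's certificate
    openssl ca -in user.csr -out user.crt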

You absolutely never ever in your lifetime want the private key of
the user. Do not generate a keypair for the user. Have them generate a
key and a CSR, but never ever have them send the key to you. You only
need their CSR (which contains their public key, signed by their
private key) in order to sign their public key.

OK. So let’s assume you’ve got that out of the way. What you have now is your CA’s certificate (usually self-signed) and a few users who now own certificates you have signed for them.

Now let’s make use of this (I’m assuming you know reasonably well how to configure these web servers in general; I’m only going into the client certificate details).

nginx

For nginx, make sure you have enabled SSL using the usual steps. In
addition to these, set ssl_client_certificate
(docs)
to the path of your CA’s certificate. nginx will only accept client
certificates that have been signed by whatever ssl_client_certificate
you have configured.

Furthermore, set ssl_verify_client
(docs)
to on. Now only requests that provide a client certificate signed by
above CA will be allowed to access your server.

When doing so, nginx will set a few additional variables for you to use, most notably $ssl_client_cert (the full certificate), $ssl_client_s_dn (the subject name of the client certificate), $ssl_client_serial (the serial number your CA has issued for their certificate) and, most importantly, $ssl_client_verify, which you should check for SUCCESS.

Use fastcgi_param or proxy_set_header to pass these variables through to your application (in the case of a header, make sure that it was really nginx that set it and not a client faking it).
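
Put together, a server block might look something like this (the paths, the parameter names and the fastcgi_pass target are placeholders):

    server {
        listen 443 ssl;

        ssl_certificate        /etc/nginx/ssl/server.crt;  # the server's own certificate
        ssl_certificate_key    /etc/nginx/ssl/server.key;
        ssl_client_certificate /etc/nginx/ssl/my-ca.crt;   # your CA's certificate
        ssl_verify_client      on;

        location / {
            include fastcgi_params;
            # hand the client certificate details to the application
            fastcgi_param SSL_CLIENT_VERIFY $ssl_client_verify;
            fastcgi_param SSL_CLIENT_S_DN   $ssl_client_s_dn;
            fastcgi_param SSL_CLIENT_SERIAL $ssl_client_serial;
            fastcgi_pass unix:/var/run/app.sock;
        }
    }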

I’ll talk about what you do with these variables a bit later on.

Apache

As with nginx, ensure that SSL is enabled. Then set
SSLCACertificateFile to the path to your CA’s certificate. Then set
SSLVerifyClient to require
(docs).

Apache will also set many variables for you to use in your application, most notably SSL_CLIENT_S_DN (the subject of the client certificate) and SSL_CLIENT_M_SERIAL (the serial number your CA has issued). The full certificate is in SSL_CLIENT_CERT.
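
A minimal sketch of the relevant vhost bits (the paths are placeholders; SSLOptions +StdEnvVars, not mentioned above, is what tells mod_ssl to export the SSL_* variables to CGI/FastCGI applications):

    <VirtualHost *:443>
        SSLEngine on
        # the server's own certificate and key
        SSLCertificateFile    /etc/apache2/ssl/server.crt
        SSLCertificateKeyFile /etc/apache2/ssl/server.key
        # your CA's certificate; only client certs signed by it are accepted
        SSLCACertificateFile  /etc/apache2/ssl/my-ca.crt
        SSLVerifyClient require
        # export the SSL_CLIENT_* variables to the application
        SSLOptions +StdEnvVars
    </VirtualHost>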

node.js

If you want to handle the whole SSL stuff on your own, here’s an example in node.js. When you call https.createServer (docs), pass in some options. One is requestCert, which you would set to true. The other is ca, which you should set to an array of PEM-formatted strings containing your CA’s certificate.

Then you can check whether the certificate check was successful by
looking at the client.authorized property of your request object.

If you want to get more info about the certificate, use
request.connection.getPeerCertificate().
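
Putting it together, a minimal sketch might look like this (the file names are placeholders; rejectUnauthorized is set to false so the request handler gets to deal with a failed verification itself instead of having the handshake aborted):

    var https = require('https');
    var fs = require('fs');

    var server = https.createServer({
        key: fs.readFileSync('server-key.pem'),   // the server's own keypair
        cert: fs.readFileSync('server-cert.pem'),
        ca: [fs.readFileSync('my-ca.pem')],       // your CA's certificate
        requestCert: true,                        // ask the client for a certificate
        rejectUnauthorized: false                 // we check authorized ourselves below
    }, function (request, response) {
        // connection is the underlying TLS socket; authorized is the
        // flag referred to as client.authorized above
        if (!request.connection.authorized) {
            response.writeHead(401);
            return response.end('valid client certificate required');
        }
        var cert = request.connection.getPeerCertificate();
        response.writeHead(200, {'Content-Type': 'text/plain'});
        response.end('hello ' + cert.subject.CN +
            ' (serial ' + cert.serialNumber + ')');
    });

    server.listen(8443);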

what now?

Once you have the information about the client certificate (via
fastcgi, reverse proxy headers or apache variables in your module),
then the question is what you are going to do with that information.

Generally, you’d probably couple the certificate’s subject and its
serial number with some user account and then use the subject and
serial as a key to look up the user data.

As people get new certificates issued (because certificates expire), the subject name will stay the same but the serial number will change, so depending on your use case, use one or both.
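
As a minimal sketch in node.js (the header names are made up – they would be whatever you configured your front-end web server to pass through):

    // look up the user behind a verified client certificate, keyed by
    // subject DN and serial; returns null if verification didn't succeed
    function userFromCert(request, usersBySubjectAndSerial) {
        if (request.headers['x-ssl-client-verify'] !== 'SUCCESS') {
            return null;
        }
        var key = request.headers['x-ssl-client-s-dn'] + '/' +
            request.headers['x-ssl-client-serial'];
        return usersBySubjectAndSerial[key] || null;
    }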

There are a couple of things to keep in mind though:

  • Due to a flaw in the SSL protocol which was discovered in 2009,
    you cannot safely have only parts of your site require a certificate.
    With most client libraries, this is an all-or-nothing deal. There is
    a secure renegotiation, but I don’t think it’s widely supported at
    the moment.
  • There is no notion of signing out. The clients have to present their
    certificate, so your clients will always be signed on (which might
    be a good thing for your use case).
  • The UI in traditional browsers to handle this kind of thing is
    absolutely horrendous.
    I would recommend using this only for APIs or with managed devices
    where the client certificate can be preinstalled silently.

You do however gain a very good method for uniquely identifying
connecting clients without a lot of additional protocol overhead. The
SSL negotiation isn’t much different whether the client is presenting a
certificate or not. There’s no additional application level code
needed. Your web server can do everything that’s needed.

Also, there’s no need for you to store any sensitive information. No more leaked passwords, no more fear of leaking passwords. You just store whatever information you need from the certificate and make sure the certificates presented are properly signed by your CA.

As long as you don’t lose your CA’s private key, you can absolutely trust your clients, and no matter how much data attackers get when they break into your web server, they won’t get passwords, nor the ability to log in as any user.

Conversely, though, make sure that you keep your CA’s private key absolutely safe. If you lose it, you will have to invalidate all client certificates, and your users will have to go through the whole process of generating new CSRs, sending them to you and so on. Terribly inconvenient.

In the same vein: Don’t have your CA certificate expire too soon. If it
does expire, you’ll have the same issue at hand as if you lost your
private key. Very annoying. I learned that the hard way back in
2001ish and that was only for internal use.

If you need to revoke a user’s access, either blacklist their serial number in your application or, much better, set up a proper CRL for your certificate authority and have your web server check it.
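
In nginx, for example, that’s a single directive pointing at your CA’s CRL file (the path is a placeholder):

    ssl_crl /etc/nginx/ssl/my-ca.crl;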

So. Client certificates can be a useful tool in some situations. It’s your job to know when, but at least now you have some hints to get you going.

Personally, I used this once, around 2009ish, for a REST API, but I have since replaced it with OAuth because that’s what most of the users knew best (read: “at all”). Depending on the audience, client certificates might be totally foreign to them.

But if it works for you, perfect.

Epic SSL fail

Today when I tried to use the fancy SSL VPN access a customer provided me with, I came across this epic fail:

Of all the things that can be wrong in an SSL certificate, this one manages to get them all wrong: the self-signed (1) certificate was issued for the wrong host name (2) and it expired (3) quite some time ago.

Granted: in this case, the issue of trust is more or less limited to the server knowing who I am (I wasn’t intending to transfer any sensitive data), but still – when you self-sign your certificate, the cost of issuing one for the correct host name, or one with a very long validity, becomes a non-issue.

Anyways – I had a laugh. And now you can have one too.

Why is nobody using SSL client certificates?

Did you know that ever since the days of Netscape Navigator 3.0, there is a technology that allows you to

  • securely sign on without using passwords
  • allow for non-annoying two-factor authentication
  • uniquely identify yourself to third-party websites without giving the second party any account information

All of this can be done using SSL client certificates.

You know: Whenever you visit an SSL protected page, what usually happens is that your browser checks the identity of the remote site by checking their certificate. But what also could happen is that the remote site could check your identity using a previously issued certificate.

This is called an SSL client-side certificate.

Sites can make the browser generate a keypair for you. Then they’ll sign your public key using their private key and they’ll be able to securely identify you from then on.
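
In markup, that’s the <keygen> element mentioned earlier – a minimal sketch, with the form target and field name made up (the browser keeps the private key and submits only the generated public key):

    <form action="/request-certificate" method="post">
      <keygen name="pubkey" challenge="random-challenge-string">
      <input type="submit" value="Generate key and request certificate">
    </form>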

The certificate is stored in the browser itself and your browser will send it to any (SSL protected) site requesting it. The site in turn could then identify you as the owner of the private key associated to the presented certificate (provided the key wasn’t generated on a pre-patch Debian installation *sigh*).

The keypair is bound to the machine it was generated on, though it can be exported and re-imported on a different machine.

It solves our introductory three problems like this:

  • By presenting the certificate, the origin server can identify you. No need to enter a user name or a password.
  • By asking for a password (something you know) and comparing the SSL certificate (something you have), you get cheap and easy two-factor authentication that’s a lot more secure than asking for your mother’s maiden name.
  • If the requesting party in a three-site scenario knows your public key and uses it to request information from a requested party, you can revoke access by this key at any time without any of the parties knowing your username and password.

Looks very nice, doesn’t it?

So why isn’t it used more often (read: at all)?

This is why:

[Screenshot: the chain of preference dialogs you have to click through to reach the browser’s certificate manager]

The screenshot shows what’s needed to actually have a look at the client side certificates installed in your browser, which currently is the only way of accessing them. Let’s say you want to copy a keypair from one machine to another. You’ll have to:

  1. Open the preferences (many people are afraid of even that)
  2. Select Advanced (scary)
  3. Click Encryption (encry… what?)
  4. Click “View Certificates” (what do the other buttons do? oops! Another dialog?)
  5. Select your certificate (which one?) and click “Export” (huh?)

Even generation of the key is done in-browser without any feedback from the site requesting the key.

This is like basic authentication (nobody uses this one) vs. forms based authentication (which is what everybody uses): It’s non-themeable, scary, modal and complicated.

What we need for client side certificates to become useful is a way for sites to get more access to the functionality than they currently do: They need information on the key generation process. They should allow the user to export the key and to re-import it (just spawning two file dialogs should suffice – of course the key must not be transmitted to the site in the process). They need a way to list the keys installed in a browser. They need to be able to add and remove keys (on the user’s request).

In the current state, this excellent idea is rendered completely useless by the awful usability and the completely detached nature: This is a browser feature. It’s browser dependent without a way for the sites to control it – to guide users through steps.

For this to work, sites need more control.

Without giving them access to your keys.

Interesting problem. Isn’t it?