OAuth signature methods

I’m currently looking into web services and different methods of request authentication, especially as what I’m aiming to end up with is something inherently RESTful, since that will provide me with the best flexibility when designing a frontend to the service. Generally, the arguments of the REST crowd convince me: it works like the human-readable web, it’s inherently scalable, it enforces a clean structure of resources and, finally, it’s easy to program against due to its “obvious” API.

As different services are going to communicate with each other, sometimes acting as users of their respective platforms, and because I’m not really inclined to pass credentials around (or make the user do one half of the tasks on one site and the other half on another), I was looking into methods of authentication and authorization that work in a RESTful environment and don’t require passing around user credentials.

The first thing I did was to note down the requirements, and I quickly designed something using public key cryptography which would possibly have worked quite nicely (I’m no expert in this field, though).

Then I learned about OAuth which was designed precisely to solve my issues.

Eager, I read through the specification, but I was put off by one single fact: the default method for signing requests – the method that is most widely used and most widely supported – relies on a shared secret.

Even worse: the shared secret must be known in clear on both the client and the server (I’m using the common terminology here; OAuth speaks of consumers and providers, but I’m (still) more used to the traditional naming).
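To illustrate, here’s a minimal sketch of HMAC-SHA1 signing in Python – the base string construction is heavily simplified and the secret is made up, but the mechanics match section 9.2 of the spec. The crucial point: verification is the exact same computation as signing, so the server has to hold the secret in clear as well.

    import base64
    import hashlib
    import hmac
    from urllib.parse import quote

    # A (simplified) signature base string; the real construction per
    # section 9.1 collects, sorts and percent-encodes all request
    # parameters.
    base_string = "GET&http%3A%2F%2Fprovider.example%2Fphotos&oauth_consumer_key%3Ddpf43f3p2l4k3l03"

    def sign(base_string, consumer_secret, token_secret=""):
        # The HMAC key is the percent-encoded consumer secret and token
        # secret, joined by "&" (section 9.2).
        key = quote(consumer_secret, safe="") + "&" + quote(token_secret, safe="")
        digest = hmac.new(key.encode("ascii"),
                          base_string.encode("ascii"),
                          hashlib.sha1).digest()
        return base64.b64encode(digest).decode("ascii")

    # The client signs with the shared secret ...
    signature = sign(base_string, "kd94hf93k423kf44")

    # ... and the server, to verify, must run the very same computation,
    # which means it must store "kd94hf93k423kf44" in the clear, too.
    assert signature == sign(base_string, "kd94hf93k423kf44")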

This is bad on multiple levels:

  • As the secret is stored in two places (client and server), it’s twice as likely to leak out as if it were stored in only one place (the client).
  • If the token is compromised, the attacker can act in the name of the client without any way of being detected.
  • Frankly, it’s a responsibility I, as a server designer, would not want to take on. If the secret is on the client and the client screws up and lets it leak, it’s their problem; if the secret is stored on the server and the server screws up, it’s my problem and I have to take responsibility.
    Personally, I’m quite confident that I would not leak secret tokens, but can I be sure? Maybe. Do I even want to think about this? Certainly not if there is another option.
  • If, god forbid, the whole table containing all the shared secrets is compromised, I’m really, utterly screwed as the attacker can use all services, impersonating any user at will.
  • Only because the server needs to know all shared secrets does the risk of losing all of them at once exist in the first place. If only the client knows the secret, an attacker has to compromise each client individually. If the server knows the secrets, compromising the server is enough to get at all the clients.
  • As per the point above, the server becomes a really interesting target for attacks and thus needs extra securing, and it even has to take measures against all kinds of more-or-less intelligent attacks (which usually end up DoSing the server or worse).

In the end, HMAC-SHA1 is just repeating history. At first, we stored passwords in the clear, then we learned to hash them, then we even salted them, and now we’re exchanging them for tokens stored in the clear.

No.

What I need is something that keeps the secret on the client.

The secret should never ever need to be transmitted to the server. The server should have no knowledge at all of the secret.

Thankfully, OAuth contains a solution for this problem: RSA-SHA1, as defined in section 9.3 of the specification. Unfortunately, it leaves a lot to be desired. Whereas the rest of the specification is a pleasure to read and very, well, specific, section 9.3 contains the following phrase:

It is assumed that the Consumer has provided its RSA public key in a verified way to the Service Provider, in a manner which is beyond the scope of this specification.

Sure. Just specify the (IMHO) useless way using shared secrets and leave out the interesting and IMHO only functional method.

Sure. Transmitting a public key is a piece of cake (it’s public after all), but this puts another burden on the writer of the provider documentation, and as the exchange is unspecified, implementors will be forced to amend the existing libraries with custom code to transmit the key.
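For contrast, here’s what RSA-SHA1 looks like, sketched with Python’s cryptography package (my choice of library, not anything the spec prescribes; the base string is simplified as above): the client signs with its private key, and the server verifies with nothing but the public half – no secret ever changes hands.

    import base64
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Client side: generate a key pair; the private half never leaves
    # the client.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    base_string = b"GET&http%3A%2F%2Fprovider.example%2Fphotos&oauth_consumer_key%3Ddpf43f3p2l4k3l03"

    # RSA-SHA1 per section 9.3: a PKCS#1 v1.5 signature over the
    # signature base string, base64-encoded into oauth_signature.
    oauth_signature = base64.b64encode(
        private_key.sign(base_string, padding.PKCS1v15(), hashes.SHA1())
    )

    # Server side: all the server ever stores is the public key.
    pem = private_key.public_key().public_bytes(
        serialization.Encoding.PEM,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    public_key = serialization.load_pem_public_key(pem)
    public_key.verify(
        base64.b64decode(oauth_signature),
        base_string,
        padding.PKCS1v15(),
        hashes.SHA1(),
    )  # raises InvalidSignature if the signature does not match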

Also, I’m unclear on header size limitations. As the server needs to know which public key was used for the signature (oauth_consumer_key), it must be sent on each request. While a manually generated token can be small, a public key certainly isn’t. Is there a size limit for HTTP headers? I’ll have to check that.

I could just transmit a key ID (with the key itself known to the server) or the key’s fingerprint as the consumer key, but is that following the standard? I didn’t see this documented anywhere, and examples in the wild are scarce.
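Something like this is what I have in mind – to be clear, this fingerprint scheme is my own idea, not anything the specification sanctions:

    import hashlib
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    public_key = rsa.generate_private_key(
        public_exponent=65537, key_size=2048
    ).public_key()

    # Fingerprint: SHA-1 over the DER-encoded public key, similar to
    # what OpenSSH and X.509 tooling do. 40 hex characters fit into a
    # header easily, and the server can use them to look up the full
    # key it already knows.
    der = public_key.public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    oauth_consumer_key = hashlib.sha1(der).hexdigest()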

Well… as usual, the better solution just requires more work, and I can live with that, especially considering that, for now, I’ll be the one writing both server and client. Still, I feel the upcoming pain should third party consumers decide to hook up with that provider.

If you ask me what I would have done in the footsteps of the OAuth guys: I would have specified only RSA-SHA1 (and maybe PLAINTEXT) and not even bothered with HMAC-SHA1. And I would have specified a standard way for the public key exchange between consumer and provider.

Now the train has left the station, and everyone interested in creating a really secure (and convenient – at least for the provider) solution is left with more work and non-standardized methods.

… and back to Thunderbird

It has been a while since I last posted about email – still a topic very close to my heart, be it on the server side or on the client side (though the server side generally works very well here, which is why I don’t post about it).

Way back when, I wrote about Becky!, which is also where I declared the points I deem important in a mail client. A bit later, I talked about The Bat!, but in the end I settled on Thunderbird, just to switch to Mac Mail when I moved to the Mac.

After that came my excursion to Gmail, but now I’m back to Thunderbird again.

Why? After all, my Gmail review sounded very nice, didn’t it?

Well…

  • Gmail is blazingly fast once it’s loaded, but starting the browser and then Gmail (it loads so slowly that “starting (the) Gmail (application)” is a valid phrase to use) is always slower than just keeping a mail client open on the desktop.
  • Google Calendar Sync sucks, and we’re using Exchange/Outlook here (and are actually quite happy with it – for calendaring and address books; it sucks for mail, but it provides decent IMAP support), so there was no way for the other folks here to have a look at my calendar.
  • Gmail always adds a “Sender:” header when using a custom sender domain, which technically is the right thing to do, but Outlook on the receiving end screws up by showing the mail as being “From xxx@gmail.com on behalf of xxx@domain.com”, which isn’t really what I’d want.
  • Google’s contact management offering is subpar, even compared to Exchange.
  • The iPhone works better with Exchange than it does with Google (yes, an iPhone, but that’s another story).
  • The cool Gmail MIDP client doesn’t work/isn’t needed on the iPhone, but originally was one of the main reasons for me to switch to Gmail.

The one thing I really loved about Gmail, though, was having a clean inbox thanks to a means of archiving messages with just a single keyboard shortcut. Using a desktop mail client without that functionality wouldn’t have been possible for me any more.

This is why I’ve installed Nostalgy, a Thunderbird extension allowing me to assign a “Move to folder xxx” action to a single keystroke (y in my case – just like Gmail).

My reasons for using Thunderbird over Mac Mail are its performance and Mac Mail’s crazy idea of always downloading all the messages. Thunderbird is no race horse, but Mail.app isn’t even a slug.

Lately, more and more interesting posts regarding the development of Thunderbird have appeared on Planet Mozilla, so I’m looking forward to seeing Thunderbird 3 take shape in its revitalized form.

I’m anything but conservative in my choice of applications and gadgets, but mail – also because of its importance to me – must work exactly as I want it to. None of the solutions out there do that to the full extent, but TB certainly comes closest. Even after years of testing and trying out different solutions, TB is the one that meets most of my requirements without adding new issues.

Gmail is splendid too, but it has some shortcomings that TB doesn’t come with.

Internet at home

I’m a usually very happy customer of Cablecom. They provide internet over TV cable, and since basically everyone here in Switzerland has TV cable, and because they hand out nice plain IP addresses (no PPPoE stuff), it just works – as long as you’re not caught in the administrative trap. Cablecom internet is never down, very speedy, and I’m usually envied for my pings in online matches of whatever game.

All these are very good reasons to become a customer of Cablecom, and despite what you are going to read here shortly, I would probably still recommend them to other users – at least those with some technical background – because, quite frankly, of all the ways to get broadband here in Switzerland, this is the one that works most easily and most consistently.

But once you fall into the administrative trap, all hell breaks loose.

Here’s what happened to me (also, read my other post about Cablecom’s service):

Somewhere around the end of May, I got a letter telling me that I would be sent a new cable modem. Once I had it, I was to give them a call so they could deactivate the old one. And if I didn’t call, they’d automatically disable the old modem after a couple of weeks.

Unfortunately, I never got that modem. I don’t know who’s to blame and I don’t care. Also, I could not have anticipated the story as it’s now unfolding, because the letter clearly said that I’d get the modem at an unknown later date, so I wasn’t worried at the time.

At the beginning of June, I noticed the network going down. Not used to that, especially not for a whole day, I called the hotline and told them that I suspected them of shutting off my service despite me not having received the modem.

They confirmed that and promised to resend the modem. Re-enabling the old one was not possible, they further told me.

One week later – not having received the modem – I called again, and they told me that the order had been delayed due to some CRM software change at their end, but they promised to send it that week.

Another week passes. No modem. I call again and they tell me that the reprocessing of orders was delayed, but that I would get the modem that week for sure. Knowing that this probably wouldn’t be the case, I told them that I would be on vacation and that they should send it to my office address.

Another week passes and I go on vacation.

Another week passes and I call the office to ask whether the modem (which was supposed to arrive two weeks earlier at the latest) has arrived. Of course it hasn’t. What actually made me make the call was a press release I received from Cablecom announcing more customers than ever – the irony of which brought my mind back to the non-existing internet at my home.

So I called support again. They did notice that my order was late, but they had no idea why it was taking so long, there was no way of speeding it up, and they had no idea when I would get the modem (keep in mind that I’m paying CHF 79 per month for non-working internet access).

At this point I’d had enough, and I called someone higher up whom I know at Cablecom.

In the end, I was able to get internet access via that route, but it’s not entirely official, and I still don’t have the slightest idea when – or if – the problem with my actual account will ever be fixed.

Pathetic.

Still: if everything goes well, you have nothing to fear. From a technical standpoint, Cablecom owns all other currently widely available methods of broadband internet access, so this is what I will be sticking with. Just be prepared for longer service interruptions once you fall into the administrative trap.