Web service authentication

While reading an article about how to make Google Reader work with authenticated feeds, one big flaw behind all those Web 2.0 services sprang to mind: authentication.

I know that there are efforts underway to standardise on a common method of service authentication, but we are nowhere near there yet.

Take Facebook: they offer to let you enter your email account credentials into a form so they can send an invitation to all your friends. Or the article I was referring to: it wants your credentials for an authenticated feed in order to make it available in Google Reader.

But think of what you are giving away…

For your service provider to be able to interact with that other service, they need to store your password. Be it short term (Facebook, hopefully) or long term (any online feed reader with authentication support). They can (and do) assure you that they will store the data in encrypted form, but to be able to access the service in the end, they need the unencrypted password. That requires them not only to use reversible encryption, but also to keep the encryption key around.

Do you want a company in a country whose laws you are not familiar with to have access to all your account data? Do you want to give them the password to your personal email account? Or to everything else in case you share passwords?

People don’t seem to grasp this problem, as account data is freely given away all over the place.

Efforts like OAuth are clearly needed, but as a web-based technology, they can’t solve all of the problems (what about email accounts, for example?).

But is this the right way? We can’t even trust desktop applications. Personally, I think the good old username/password combination is at the end of its usefulness (was it ever really useful?). We need new, better ways of proving our identity – something that is easily passed around and yet cannot be copied.

SSL client certificates feel like an underused but very interesting option. Let’s look at two examples: the first is your authenticated feed, the second is your SSL-enabled email server. Let’s say you want to give a web service revocable access to both without ever giving away personal information.

For the authenticated feed, the external service presents the feed server with its client-side certificate, which you have signed. By checking your signature, the feed server knows your identity, and by checking your CRL it knows whether you have authorized the access or not. The service never learns your password and can’t use your signature for anything but accessing that feed.

The same goes for the email server: the third-party service logs in with your username and the client certificate you signed, but without a password. The service doesn’t need to know your password, and if they misbehave, you revoke your signature and are done with it (I’m not sure whether mail servers support client certificates, but I gather they do, as it’s part of the SSL spec).
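
To make the idea a bit more concrete, here is a minimal sketch of how such a third-party service could fetch your authenticated feed while identifying itself with the client certificate you signed. It uses PHP’s curl extension; the feed URL and the certificate/key paths are made up for the example.

<?php
// Hypothetical feed URL and certificate/key locations – adjust to your setup.
$feedUrl = 'https://feeds.example.com/private.atom';

$ch = curl_init($feedUrl);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

// The client certificate you signed for the service, plus its private key.
curl_setopt($ch, CURLOPT_SSLCERT, '/etc/thatservice/client-cert.pem');
curl_setopt($ch, CURLOPT_SSLKEY, '/etc/thatservice/client-key.pem');

// The feed server's own certificate is verified as usual.
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);

$feed = curl_exec($ch);
if ($feed === false) {
    // Once you revoke the certificate, the server's CRL check makes the
    // TLS handshake (and with it this request) fail.
    die('Could not fetch feed: ' . curl_error($ch));
}
curl_close($ch);

echo $feed;

No password ever changes hands here – and as soon as you revoke the certificate, the request above simply starts failing.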

Client-side certificates already provide a standard means of secure authentication without ever passing a shared secret around. Why aren’t they used way more often these days?

A rant on brace placement

Many people consider it good coding style to place braces (in languages that use them for block boundaries) on their own line, like so:

function doSomething($param1, $param2)
{
    echo "param1: $param1 / param2: $param2";
}

Their argument usually is that this clearly shows the block boundaries, thus increasing readability. As a proponent of placing braces at the end of the statement opening the block, I strongly disagree. I would format the above code like so:

function doSomething($param1, $param2){
    echo "param1: $param1 / param2: $param2";
}

Here is why I prefer this:

  • In many languages, code blocks don’t have an identity of their own – functions do, but plain blocks don’t (they don’t even provide scope). By placing the opening brace on its own line, you emphasize the block, but you actually make it harder to see what caused the block in the first place.
  • With correct indentation, the presence of the block should be obvious anyway. There is no need to emphasize it further (at the cost of readability of the block-opening statement).
  • I doubt that using one line per token really makes the code more readable. Heck… why don’t we write that sample code like so?
function
doSomething
(
$param1,
$param2
)
{
    echo "param1: $param1 / param2: $param2";
}

The IE rendering dilemma – solved?

A couple of months ago, I wrote about the IE rendering dilemma: how to fix IE8’s rendering engine without breaking all the corporate intranets out there? How to create a standards-oriented browser and still ensure that Microsoft’s main customers – the enterprises – can run a current browser without having to redo all their (mostly internal) web applications?

Only three days after my posting, the IEBlog talked about IE8 passing the Acid2 test. And when you watch the video linked there, you’ll notice that they indeed kept the IE7 engine untouched and added a switch to force IE8 into using the new rendering engine.

And yesterday, A List Apart showed us how it’s going to work.

While I completely understand Microsoft’s solution and the reasoning behind it, I can’t see any other browser doing what Microsoft recommended as a new standard. Keeping multiple rendering engines in the browser and defaulting to outdated ones is, in my opinion, a bad idea. Download sizes of browsers increase considerably, security problems must be patched multiple times, and, as the WebKit blog put it, it “[..] hurts the hackability of the code [..]”.

As long as the other browser vendors have neither IE’s market share nor the big company intranets depending on their browsers, I don’t see any reason at all for them to adopt IE’s model.

Also, when I’m doing (X)HTML/CSS work, it usually works and displays correctly in every browser out there – with the exception of IE’s current engine. As long as browsers don’t have awful bugs all over the place and you are not forced to hack around them, deviating from the standard in the process, there is no way a page you create will only work in one specific version of a browser. Even more so: when it breaks in a future version, that’s a bug in the browser that must be fixed there.

Assuming that Microsoft will, finally, get it right with IE8 and subsequent browser versions, we web developers should be fine with

<meta http-equiv="X-UA-Compatible" content="IE=edge" />

on every page we output to a browser. These compatibility hacks are for people who don’t know what they are doing. We know. We follow standards. And if IE begins to do so as well, we are fine with using the latest version of the rendering engine there is.
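
As far as I can tell, the same opt-in can also be sent as an HTTP header from the server side, which saves you from repeating the tag in every template. A minimal PHP sketch of that idea (the bootstrap file is, of course, hypothetical):

<?php
// bootstrap.php – included at the top of every page, before any output.
// Send the opt-in as an HTTP header instead of repeating the <meta> tag.
header('X-UA-Compatible: IE=edge');

On Apache, a single Header set X-UA-Compatible "IE=edge" directive via mod_headers should achieve the same thing for the whole site.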

If IE doesn’t play well and we need to apply braindead hacks that break when a new version of IE comes out, then we’ll all be glad that we have this method of forcing IE to use a particular engine, thus making sure that our hacks continue to work.

Apple TV – Second try

When Apple announced their AppleTV a couple of months (or was it years?) ago, I was very skeptical of the general idea behind the device. Think of it: What was the big success behind the iPod? That it could play proprietary AAC files people buy from the music store?

No. That thing didn’t even exist back then. The reason for its success was the totally easy (and FAST – remember: back in the day, every other MP3 player used 1.1 MB/s USB while the iPod used 40 MB/s FireWire) handling and the fact that it was an MP3 player – playing the files everyone already had.

It was a device for playing the content that was available at the time.

The AppleTV in its first incarnation was a device capable of playing content that wasn’t exactly available. Sure, it could play the two video podcasts that existed back then (maybe more, but you get the point). And you could buy TV shows and movies in subpar quality on your PC (Windows or Mac) and then transfer them to the device. But the content floating around back then was in a different format: XviD dominated the scene, x264 was a newcomer, and MP4 (and MOV) wasn’t exactly widely used.

So what you got was a device, but no content (and the compatible content you had was of subpar quality compared to the incompatible content that was available). And you needed a PC, so it wasn’t exactly a device I could hook up at my parents’ place, for example.

All these things were fixed by Apple today:

  • There is a huge library of content available right here, right now (at least in the US): the new movie rental service. Granted, I think it’s not quite there yet in terms of price vs. usability (I’d consider $5 a totally acceptable price for a movie with unlimited replayability), but at least we have the content.
  • It works without a PC. I can hook this thing up to my parents’ TV and they can immediately use it.
  • The quality is OK. Actually, it’s more than OK. There is HD content available (though maybe only in 720p, but frankly, on my expensive 1080p projector, I don’t see that much of a difference between 720p and 1080p).
  • It can still access the scarce content that was available before.

The fact that this provides very easy-to-use video-on-demand to a huge number of people is what makes me think that this little device is even more of a disruptive technology than the iPod or the iPhone. Think of it: countless companies are trying to make people pay for content these days – the telcos, the cable companies, the device manufacturers. But what do we get? Crappy, constantly crashing devices which are way too complicated for a non-geek and way too limited in functionality for a geek.

Now we got something that’s perfect for the non-geek. It has the content. It has the ease-of-use. Plug it in, watch your movie. Done. This is what a whole industry tried to do and failed so miserably.

I, for my part, will still prefer the flexibility of my custom Windows Media Center solution. I will still prefer the openness provided by illegal copies of movies. I totally refuse to pay multiple times for something just because someone says I have to. But that’s me.

And even I may sooner or later prefer the comfort of select-now-watch-now to the current procedure (log into private tracker, download torrent, wait for download to finish, watch – torrents are not streamable, even though the bandwidth would easily suffice in my case, because the packets arrive out of order), so even for me, the AppleTV could be interesting.

This was yet another perfect move by Apple. Ignore the analysts out there who expected more out of this latest keynote. Ignore the bad reception of the keynote by the market (I hear that Apple stock just dropped a little bit). Ignore all that and listen to yourself: this wonderful device will certainly revolutionize the way we consume video content.

I’m writing this as a constant sceptic – as a person who always tries to find the flaw in any given device. But I’m sure that this time around, they really got it. Nice work!

My PSP just got a whole lot more useful

Or useful at all – considering the games that are available for that console. To be honest, of all the consoles I have owned in my life, the PSP must be the most underused one. I basically own two games for it: Breath of Fire and Tales of Eternia – from this choice of titles alone (and from reading this blog), you may notice a certain affinity for Japanese-style RPGs.

These are the closest thing to a successor of the classic graphic adventures I started my computing career with, minus the hard-to-solve puzzles, plus a (generally) much more interesting story. So for my taste, they are a perfect match.

But back to the PSP. It’s an old model – one of the first here in Switzerland. One of the first in the world, to be honest: I bought the thing WAAAY back in the hope of seeing many interesting RPGs – or even just good ports of old classics. Sadly, neither really happened.

Then, a couple of days ago, I found a usable copy of the game Lumines. Usable in the sense that when the guy in the store told me there was a sequel out, and I told him that I did not intend to actually play the game, he just winked and wished me good luck with my endeavor.

Or in layman’s terms: that particular version of Lumines has a security flaw that allows one to do a lot of interesting stuff with the PSP – like installing an older, flawed version of the firmware, which in turn allows one to completely bypass whatever security the PSP would otherwise provide.

And now I’m running the latest M33 firmware: 3.71-M4.

What does that mean? It means that the formerly quite useless device has just become the device of my dreams: it runs SNES games. It runs PlayStation 1 games. It’s portable. I can use it in bed without a large assembly of cables, gamepads and laptops. It’s instant-on. It’s optimized for console games. It has a really nice digital directional pad (gone are the days of struggling with diagonally emphasized joypads – try playing Super Metroid with one of those).

It plays games like Xenogears, Chrono Cross and Chrono Trigger – it finally allows me to enjoy the RPGs of old in bed before falling asleep. Or in the bathtub. Or wherever.

It’s a real shame that once more I had to resort to legally questionable means to get a device to the state I want it to be in. Why can’t I buy any PS1 game directly from Sony? Why are the games I want to play not even available in Switzerland? Why is it illegal to play the games I want to play? Why are most of the gadgets sold today crippled in one way or another? Why is it illegal to un-cripple the gadgets we bought?

Questions I, frankly, don’t want to answer. For years now I have wanted a way to play Xenogears in bed and while taking a bath. Now I can, so I’m happy. And playing Xenogears. And loving it just like when I played through that jewel of gaming history for the first time.

If I find time, expect some more in-depth articles about the greatness of Xenogears (just kidding – just read the early articles in this blog) or about how to finally get your PSP where you want it to be – there are lots of small things to keep in mind to make it work completely satisfactorily.

SPAM insanity

I don’t see much point in complaining about SPAM, but it’s slowly but surely reaching complete insanity…

What you see here is the recent history view of my DSPAM – our second line of defense against SPAM.

Red means SPAM (the latest of the messages was a quite clever phishing attempt which I had to manually reclassify).

To give this some perspective: the last genuine email I received arrived this morning at 7:54 (it’s now 10 hours later), and even that was just an automatically generated mail from Skype.

To put it into even more perspective: my DSPAM reports that since December 22nd, I got 897 SPAM messages and – brace yourself – 170 non-spam messages, of which 100 were Subversion commit emails and 60 were other emails sent by automated cron jobs.
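
Spelled out in numbers – a quick back-of-the-envelope script using the figures above:

<?php
// Figures from my DSPAM statistics since December 22nd.
$spam      = 897;
$ham       = 170;
$automated = 100 + 60;        // Subversion commits plus cron-job mails

$total  = $spam + $ham;       // 1067 messages in total
$humans = $ham - $automated;  // mails written by actual humans

printf("%.1f%% of all mail was spam\n", 100 * $spam / $total);  // ~84.1%
printf("%d messages came from real people\n", $humans);         // 10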

What I’m asking myself now is: Do these spammers still get anything out of their work? The signal-to-noise ratio has gone down the drain in a manner which can only mean that no person on earth would actually still read through all this spam and even be stupid enough to actually fall for it.

How bad does it have to get before it gets better?

Oh and don’t think that DSPAM is all I’m doing… No… these 897 mails were the messages that passed through both the ix DNSBL and SpamAssassin.

Oh, and kudos to the DSPAM team: a recognition rate of 99.957% is really, really good.

The IE rendering dilemma

There’s a new release of Internet Explorer, aptly named IE8, pending, and a whole lot of web developers fear new bugs and no fixes for existing ones – just like the problems we had with IE7.

A couple of really nasty bugs were fixed, but there was no significant progress in terms of extended support for web standards, nor even a really significant number of bug fixes.

And now, so web developers fear, history is going to repeat itself. Why, people ask, don’t they just throw away the existing code base and replace it with something more reasonable? Or, if licensing or political issues prevent using something not developed in-house, why not rewrite IE’s rendering engine from scratch?

Backwards compatibility. While the web itself has more or less stopped using IE-only-isms and has begun embracing web standards (and thus begun cursing IE’s bugs), corporate intranets – the websites accessed by Microsoft’s main customer base – certainly have not.

ActiveX, <FONT> tags, VBScript – the list is endless, and companies don’t have the time or resources to remedy that. Remember: rewriting for no purpose other than “being modern” is a real waste of time and certainly not worth the effort. Sure, new applications can be developed in a standards-compliant way. But think about the legacy! Why throw all that away when it works so well on the currently installed base of IE6?

This is why Microsoft can’t just throw away what they have.

The only option I see, aside from trying to patch up what’s badly broken, is to integrate another rendering engine into IE – one that’s standards-compliant and that can be selected by some means, maybe an HTML comment (the DOCTYPE switch is already taken).

But then, think of the amount of work this creates on the backend: now you have to maintain two completely different engines with completely different bugs in different places. Think of security problems. And think of what happens if one of those bugs is found in a third-party engine that a hypothetical IE might be using. Is MS willing to take responsibility for third-party bugs? Is it reasonable to ask them to?

To me it looks like we are now paying the price for mistakes MS made a long time ago and for rapid technological innovation happening at the wrong time on the wrong platform (imagine the intranet revolution happening now instead). And personally, I don’t see an easy way out.

I’m very interested in seeing how Microsoft solves this problem. Ignore the standards crowd? Ignore the corporate customers? Take on the immense burden of another rendering engine? Fix the current engine (impossible, IMHO)? We’ll know once IE8 is out, I guess.

Closed Source on Linux

One of the developers behind the Linux port of the new Flex Builder has a blog post about how hard it is to build closed-source software for Linux.

Mostly, the problems boil down to the fact that Linux distributors keep patching the upstream source to fit their needs – a problem clearly rooted in the fact that open source software is, well, open source.

Don’t get me wrong: I love the concepts behind free software, and in fact every piece of software I’ve written so far has been open source (aside from most of the code I write for my employer, of course). I just don’t see why every distribution feels the urge to patch around upstream code, especially as this issue affects both open- and closed-source software projects.

And worse yet: every distribution adds its own bits and pieces – sometimes doing the same thing in different ways, thus making it impossible, or at least very hard, for a third party to create add-ons for a given package.

What good is a plugin system if the interface works slightly differently on each and every distribution?

And think of the time you waste relearning configuration files over and over again. To give an example: some time ago, SuSE shipped an Apache server that used a completely customized configuration file layout, thereby breaking every tutorial and piece of documentation out there, because none of the directives were in the files they were supposed to be in.

Other packages are deliberately split up. BIND, for example, often comes in two flavors – the server and the client – even though upstream officially ships just one package. Additionally, every library package these days is split into the actual library and the development headers. Sometimes the source of these packages even gets patched to support such a split.

This creates an incredible mess for all involved parties:

  • The upstream developer gets blamed for bugs she didn’t cause because they were introduced by the packager.
  • Third-party developers can’t rely on their plugins or other pluggable components working across distributions, even if they work against upstream.
  • Distributions have to do the same work over and over again as new upstream versions are released, thus wasting time better used for other improvements.
  • End users suffer from the general inability to reliably install precompiled third-party binaries (MySQL recommends the use of their binaries, so this even affects open-source software) and from the inability to follow online tutorials not written for the particular distribution in use.

This mess must come to an end.

Unfortunately, I don’t know how.

You see: Not all patches created by distributions get merged upstream. Sometimes, political issues prevent a cool feature from being merged, sometimes clear bugs are not recognized as such upstream and sometimes upstream is dead – you get the idea.

Solutions like the FHS and the LSB have tried to standardize many aspects of how Linux distributions should work in the hope of solving this problem. But bureaucracy and idiotic ideas (German link, I’m sorry) have been causing quite a bunch of problems lately, making the standards hard or even impossible to implement. And often the standards don’t cover the latest and greatest parts of current technology.

Personally, I’m hoping that we’ll either end up with one big distribution defining the “state of the art” and the others being 100% compatible with it, or with distributions switching to pure upstream releases, with only their own tools being custom-made.

What do you think? What has to change in your opinion?

The new iPods

So we have new iPods.

Richard sent me an email asking which model he should buy, which got me thinking about whether to upgrade myself. Especially the new touch-screen model seemed compelling to me – at first.

Still, I was unable to answer that email with a real recommendation (though honestly, I don’t think it was as much about getting a recommendation as about letting me know that the models were released and hearing my comments about them), and I still don’t really know what to think.

First off – and this is a matter of taste – I hate the new nano design: the screen is still too small to be useful for real video consumption, but it made the device very wide – too wide, I think, to comfortably keep in my trouser pockets while biking (I may be wrong, though).

Also, I don’t like the rounded corners very much, and the new interface… really… why shrink the menu to half a screen and clutter the rest with some meaningless cover art which only the smallest minority of my files are tagged with?

Cover Flow feels tacked onto the great old interface and loses a lot of its coolness without the touch screen.

They don’t provide any advantage in flash size compared to the older nano models, and I think the scroll wheel is way too small compared to the large middle button.

All in all, I would never ever upgrade my second-generation nano to one of the third generation, as they provide no advantage, look (much) worse (IMHO) and seem to have a usability problem (too small a scroll wheel).

The iPod classic isn’t interesting for me: old-style hard drives are heavy and fragile, and ever since I bought that 4 GB nano a long while ago, I have noticed that there is no real reason to have all my music on the device.

I’m using my nano way more often than I ever used my old iPod: the nano is lighter, and I began listening to podcasts. Still: while I lost HD-based iPods roughly every year and a half to faulty hard drives or hard drive connectors, my nano works as well as it did on the first day.

Additionally, the iPod classic shares the strange half-screen menu, and it’s only available in black or white. Nope. Not interesting. At least not for me.

The iPod touch is appealing because it has a really interesting user interface. But even there I have my doubts: for one, it’s basically an iPhone without the phone. Will I buy an iPhone when (if) it becomes available in Switzerland? If yes, there’s no need to buy the iPod touch. If no, there still remains that awful usability problem of touch-screen-only devices:

You can’t use them without taking them out of your pocket.

On my nano, I can play and pause the music (or, more often, a podcast), I can adjust the volume, and I can always see what’s on the screen.

On the touch interface, I have to take the screen out of standby mode, I can’t do anything without looking at the device, and I think it may be a bit bulky all in all.

The touch is the perfect bathtub surfing device. It’s the perfect device to surf the web right before or after going to sleep. But it’s not portable.

Sure, I can take it with me, but it fails in all aspects of portability: it’s bulky, it can’t be used without taking it out of your pocket and stopping whatever you are doing, it requires two hands to use (so no changing tracks on the bike any more), and it’s totally useless until you manually turn the display back on and unlock it (which also requires two hands).

So: which device should Richard buy? I still don’t know. What I do know is that I will not be replacing my second-generation nano as long as it keeps working.

The nano looks awesome, works like a charm and is totally portable. Sure, it can’t play video, but next to none of my videos actually meets the requirements of the video functionality anyway, and I don’t see myself re-encoding already compressed content. That just takes an awful lot of time, greatly degrades the quality and generally is not at all worth the effort.

HD-DVD unlocked

Earlier, it was possible to work around the AACS copy protection scheme used by HD-DVD and Blu-ray on a disc-by-disc basis.

Now it’s possible to work around it for every disc.

So once more we are in a situation where the illegal media pirate gets a better user experience than the legal user: the “pirate” can download the movie to watch on demand. He can store it on any storage medium he pleases (home servers, NASes or optical discs). He can convert the content to whatever format a particular output device requires (like an iPod) without having to buy another copy. And finally, he is able to watch the stolen media on whatever platform he chooses.

The original media, in contrast, is very much limited:

The source of the content is always the disc the user bought. It’s not possible to store legally acquired HD content on a medium other than the source disc. It’s not possible to watch it on any personal computer other than those running operating systems from Microsoft. The disc may even force the legal user to watch advertisements or trailers before the main content. There is no guarantee that a purchased disc will work in any given player – despite player and disc both bearing the same compatibility label (HD-DVD or Blu-ray logos). It’s not possible to legally acquire the content on demand, and it’s impossible to convert the content for different devices.

Back in the old days, the copy usually was inferior to the original.

In the digital age of DRM and user-money-milking, this has changed. Now the copy clearly offers many advantages the original currently can’t provide – or that the industry does not want it to provide.

I salute the incredibly smart hackers who worked around yet another “unbreakable” copy protection scheme, allowing me to create a personal backup copy of any medium I buy, store the content on my NAS, and rest assured that I’m able to play it when and where I want.

I assure you: my happiness is not based on the fact that I can now download pirated movies over BitTorrent. It’s based on the fact that I can store legally purchased HD content on the hard drive of my home server and watch it on demand without having to switch media.

Piracy, for me, is a pure usability problem.