Asking for permission

Just last year, I told @brainlock (in real life, so I can’t link) that the coolest thing about our industry was that you don’t have to ask for permission to do anything.

Want to start the next big web project? Just start it. Want to write about your opinions? Just write about them. Want to get famous? It’s still a lot of work and marketing, but nothing (aside from a lack of talent) is stopping you.

Whenever you have a good idea for a project, you start working on it, you see how it turns out and you decide whether to keep working on it or to scrap it. Aside from a bit of cash for hosting, you don’t need anything else.

This is very cool because it empowers “normal people”. Heck, I probably wouldn’t be where I currently am if it weren’t for this. Back in 1996 I had no money, I wasn’t known, I had no past experience. What I did have was enthusiasm.

Which is all that’s needed.

Only a year later though, I’m sad to see that we are on the verge of losing all of this. Piece by piece.

First came Apple with their iPhone. Even with all the enthusiasm in the world, you are not going to write an app that other people can just run on their phones. No. First you will have to ask Apple for permission.

Want to access some third-party hardware from that iPhone app? Sure. But now you have to ask not only Apple but also the third-party vendor for permission.

The explanation we were given was that a malicious app could easily bring down the mobile network, so they needed to be careful about what we could run on our phones.

But then, we got the iPad with the exact same restrictions even though not all of them even have mobile network access.

The explanation this time? Security.

As nobody wants their machine to be insecure, everybody just accepts it.

Next came Microsoft: In the Windows Mobile days before the release of 7, you didn’t have to ask anybody for permission. You bought (or pirated if you didn’t have money) Visual Studio, you wrote your app, you published it.

All of this is lost now. Now you ask for permission. Now you hope for the powers that be to allow you to write your software.

Finally, you can’t even do what you want with your PC – all because of security.

So there’s still the web, you think? I wish I could be positive about that, but as we are running out of IP addresses and the adoption of IPv6 is as slow as ever, I believe that public IP addresses are becoming a scarce good – at which point, again, you will be asking for permission.

In some countries, even today, it’s not possible to just write a blog post because the government is afraid of “unrest” (read: losing even more credibility). That’s not just countries we always perceived as “not free” – heck, even in Italy you must register with the government if you want to have a blog (it turns out that law didn’t come to pass – let’s hope no other country has the same bright idea). In Germany, if you read the law by the letter, you can’t blog at all without getting every post approved – you could write something that a minor might see.

«But permission will be granted anyways», you might say. Are you sure though? What if you are a minor wanting to create an application for your first client? Back in my day, I could just do it. Are you sure that whatever entity is going to have to give permission wants to do business with minors? You do know that you can’t have a Gmail account if you are younger than 13 years, don’t you? So age barriers exist.

What if your project competes with whatever entity has to give permission? Remember the story about the Google Voice app? Once we are out of IP addresses, the big providers and media companies who still have addresses might see your little startup web project as competition in some way. Are you sure you will still get permission?

Back in 1996, when I started my company in high school, all you needed to earn your living was enthusiasm and a PC (yes – I started doing web programming without having access to the internet).

Now you need signed contracts, signed NDAs, lobbying, developer program memberships, cash – the barriers to entry are infinitely higher at this point.

I’m afraid though, that this is just the beginning. If we don’t stand up now, if we continue to let big companies and governments take away our freedom of expression piece by piece, if we give up more and more of our freedom because of the false promise of security, then, at one point, all of what we had will be lost.

We won’t be able to just start our projects. We won’t be able to create – only to work on other people’s projects. We will lose all that makes our profession interesting.

Let’s not go there.

Please.

Discussion on HackerNews

AJAX, Architecture, Frameworks and Hacks

Today I was talking with @brainlock about JavaScript, AJAX and Frameworks and about two paradigms that are in use today:

The first is the “traditional” paradigm where your JS code is just glorified view code. This is how AJAX worked in the early days and how people are still using it. Your JS code intercepts a click somewhere, sends an AJAX request to the server and gets back either more JS code which just gets evaluated (thus giving the server a kind of indirect access to the client DOM) or an HTML fragment which gets inserted at the appropriate spot.
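
A minimal sketch of that first pattern, using jQuery (the selector, URL parameter and markup are made up for illustration):

    // Hypothetical "glorified view code": intercept a click, ask the server
    // for a rendered HTML fragment and drop it into the page.
    $('#file-list a.folder').click(function (event) {
        event.preventDefault();

        // the server answers with ready-made HTML, not with data
        $.get(this.href, { partial: 1 }, function (html) {
            $('#file-list').html(html);
        });
    });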

This means that your JS code will be ugly (especially the code coming from the server), but it has the advantage that all your view code is right there where all your controllers and your models are: on the server. You see this pattern in use on the 37signals pages or in the github file browser for example.

Keep the file browser in mind as I’m going to use that for an example later on.

The other paradigm is to go the other way around and promote JS to a first-class language. Now you build a framework on the client end and transmit only data (XML or JSON, but mostly JSON these days) from the server to the client. The server just provides a REST API for the data plus serves static HTML files. All the view logic lives only on the client side.

The advantages are that you can organize your client-side code much better, for example using Backbone, that there’s no expensive view rendering on the server side and that you basically get your third-party API for free, because the API is the only thing the server provides.

This paradigm is used for the new twitter webpage or in my very own tempalias.com.

Now @brainlock is a heavy proponent of the second paradigm. After being enlightened by the great Crockford, we both love JS and we both worked on huge messes of client-side JS code which has grown over the years and lacks structure and feels like copy pasta sometimes. In our defense: Tons of that code was written in the pre-enlightened age (2004).

I, on the other hand, see some justification for the first pattern as well and I wouldn’t throw it away so quickly.

The main reason: It’s more pragmatic, it’s more DRY once you need graceful degradation and arguably, you can reach your goal a bit faster.

Let me explain by looking at the github file browser:

If you have a browser that supports the HTML5 history API, then a click on a directory will reload the file list via AJAX and at the same time the URL will be updated using pushState (so that the current view keeps its absolute URL, which is valid even after you open it in a new browser).

If a browser doesn’t support pushState, it will gracefully degrade by just using the traditional link (and reloading the full page).

Let’s map this functionality to the two paradigms.

First the hacky one:

  1. You render the full page with the file list using a server-side template
  2. You intercept clicks to the file list. If it’s a folder:
  3. you request the new file list
  4. the server now renders the file list partial (in rails terms – basically just the file list part) without the rest of the site
  5. the client gets that HTML code and inserts it in place of the current file list
  6. You patch up the URL using pushState

Done. The view code is only on the server. Whether the file list is requested via the AJAX call or the traditional full page load doesn’t matter: the code path is exactly the same. The only difference is that the rest of the page isn’t rendered in the case of an AJAX call. You get graceful degradation and no additional work.
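
In code, that hacky flow could look roughly like this (the selectors and the partial parameter are assumptions; the server is assumed to render just the list when asked for a partial):

    // Hypothetical sketch of the server-rendered-partial approach with pushState.
    $('#file-list').delegate('a.folder', 'click', function (event) {
        if (!window.history || !window.history.pushState) {
            return; // no pushState: let the normal link trigger a full page load
        }
        event.preventDefault();

        var url = this.href;
        $.get(url, { partial: 1 }, function (html) {
            $('#file-list').html(html);            // same markup the server template produces
            window.history.pushState({}, '', url); // keep the view's absolute URL shareable
        });
    });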

Now assuming you want to keep graceful degradation possible and you want to go the JS framework route:

  1. You render the full page with the file list using a server-side template
  2. You intercept the click to the folder in the file list
  3. You request the JSON representation of the target folder
  4. You use that JSON representation to fill a client-side template which is a copy of the server side partial
  5. You insert that HTML at the place where the file list is
  6. You patch up the URL using push state

The number of steps is the same, but the amount of work isn’t: if you want graceful degradation, you write the file list template twice – once as a server-side template, once as a client-side template. Both are quite similar, but usually you’ll be forced to use slightly different syntax. If you update one, you have to update the other, or the experience will differ depending on whether you click on a link or open the URL directly.
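
For comparison, a sketch of the same flow in the framework flavour, with a hypothetical hand-rolled client-side template (the JSON shape and URLs are made up):

    // Hypothetical client-side-template approach: fetch JSON, render it through
    // a template that mirrors the server-side partial.
    function renderFileList(files) {
        return '<ul>' + $.map(files, function (file) {
            return '<li class="' + file.type + '"><a href="' + file.url + '">' +
                   file.name + '</a></li>';
        }).join('') + '</ul>';
    }

    $('#file-list').delegate('a.folder', 'click', function (event) {
        event.preventDefault();
        var url = this.href;
        $.getJSON(url + '.json', function (data) {
            $('#file-list').html(renderFileList(data.files)); // duplicates the server partial
            window.history.pushState({}, '', url);
        });
    });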

Also you are duplicating the code which fills that template: On the server side, you use ActiveRecord or whatever other ORM. On the client side, you’d probably use Backbone to do the same thing but now your backend isn’t the database but the JSON response. Now, Backbone is really cool and a huge timesaver, but it’s still more work than not doing it at all.

OK. Then let’s skip graceful degradation and make this a JS-only client app (good luck trying to get away with that). Now the view code on the server goes away and you are just left with the model on the server to retrieve the data, with the model on the client (Backbone helps a lot here, but there’s still a substantial amount of code to write that otherwise wouldn’t be needed) and with the view code on the client.

Now don’t get me wrong.

I love the idea of promoting JS to a first class language. I love JS frameworks for big JS only applications. I love having a “free”, dogfooded-by-design REST API. I love building cool architectures.

I’m just thinking that at this point it’s so much work doing it right, that the old ways do have their advantages and that we should not condemn them for being hacky. True. They are. But they are also pragmatic.

overpriced data roaming

You shouldn’t complain if something gets cheaper. But if something gets 7 times cheaper from one day to the next, it leaves you wondering whether the price offered so far might have been a tad too high.

I’m talking about Swisscom’s data roaming charges.

Up to now, you paid CHF 50 per 5 MB (CHF 10 per MB) when roaming in the EU. Yes. That’s around $10 or EUR 6.60 per megabyte. Yes. Megabyte. Not gigabyte. And you people complain about getting limited to 5 GB for your $30.

Just now I got a press release from Swisscom saying that they are changing their roaming charges to CHF 7 per 5 MB. That’s CHF 1.40 per MB, which is 7 times cheaper.

If you can make a product of yours 7 times cheaper from one day to the next, the rates you charged before were clearly way too high.

Why node.js excites me

Today, on Hacker News, an article named “Why node.js disappoints me” appeared – right on the day I returned from jsconf.eu (awesome conference. Best days of my life, I might add) where I was giving a talk about using node.js for a real web application that provides real use: tempalias.com

Time to write a rebuttal, I guess.

The main gripe Eric has with node is a gripe with the libraries that are available. It’s not about performance. It’s not about ease of deployment, or ease of development. In his opinion, the libraries that are out there at the moment don’t provide anything new compared to what already exists.

On that level, I totally agree. The most obvious candidates for development and templating try to mimic what’s already out there for other platforms. What’s worse: there seems to be no real winner, and node itself doesn’t seem to make a recommendation or even include something with the base distribution.

This is inherently a good thing though. Node.js isn’t your complete web development stack. Far from it.

Node is an awesome platform for very easily writing very well performing servers. Node is an awesome platform for your daily shell scripting needs (allowing you to work in your favorite language even for these tasks). Node isn’t about creating awesome websites. It’s about giving you the power to easily build servers. Web, DNS, SMTP – we’ve seen it all.

To help you with web servers, and probably to show us users how it’s done, node also provides a very good library to interact with the HTTP protocol. This isn’t about generating web pages. This isn’t about URL routing, or MVC, or whatever. This is about writing a web server. About interacting with HTTP clients. Or HTTP servers. On the lowest level.
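
To illustrate what “lowest level” means, here’s a minimal sketch using nothing but node’s http module – no routing, no MVC, just the request and response objects (port and path chosen arbitrarily):

    // A bare-bones HTTP server built directly on node's http library.
    var http = require('http');

    http.createServer(function (req, res) {
        if (req.method === 'GET' && req.url === '/ping') {
            res.writeHead(200, {'Content-Type': 'text/plain'});
            res.end('pong\n');
        } else {
            res.writeHead(404, {'Content-Type': 'text/plain'});
            res.end('not found\n');
        }
    }).listen(8080);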

So when comparing node with other platforms, you must be careful to compare apples with apples. Don’t compare pure node.js to Rails. Compare it to mod_wsgi, to FastCGI, to a servlet container (if you must) or to mod_php (the module that gives your script access to server internals – not the language) or mod_perl.

In that case, consider this. With node.js you don’t worry about performance, you don’t worry about global locks (you do worry about never blocking though), and you really, truly and most awesomely don’t worry about race conditions.

Assuming

    var a = 0;
    var f = function(){
        var t = a; // proving a point here. I know it's not needed
        a = t + 1;
    };
    setTimeout(f, 100);
    setTimeout(f, 100);

you’d always end up with a === 2 once both timeouts have executed. There is no interruption between the assignment of t and the increment. No worries about threading. No hours wasted trying to find out why a suddenly (and depending on the load on your system) is either 1, 2 or 3.

Over the years of programming experience, we have learned that what f does in my example above is a bad thing. We feel strange when typing code like this, looking for some method of locking, of marking a critical section. With node, there’s no need to.

This is why writing servers (remember: highly concurrent access to potentially the same code) is so much fun in node.

The perfect little helpers that were added to deal with the HTTP protocol are just the icing on the cake, but in so many other frameworks (cough WSGI cough) stuff like chunking, multipart parsing, even just reading the client’s data from an input stream are hard if you do them on your own, or completely beyond your control if you let the libraries do them.

With node, you get the knobs to turn in the easiest way possible.

Now we know that we can easily write well performing servers (of any kind with special support for HTTP) in node, so let’s build a web site.

In traditional frameworks, your first step would be to select a framework (because the HTTP libraries are so effing (technical term) hard to use).

You’d end up with something lightweight like, say, mnml or werkzeug in python or something more heavy like rails for ruby (though rack isn’t nearly as bad as wsgi) or django for python. You’d add some kind of database abstraction or even ORM layer – maybe something that comes with your framework.

Sure. You could do that in node too. There are frameworks around.

But remember: Node is an awesome tool for you to write highly specialized servers.

Do you need to build your whole site in node?

Do you see this as a black or white situation?

Over the last year, I’ve done two things.

One was to lay out how to augment an existing application (PHP, PostgreSQL) with a WebSocket-based service using node to greatly reduce the load on the existing application. I didn’t have time to implement this yet, but it would work wonders.

The other thing was to prove a point and to implement a whole web application in node.

I built tempalias.com

At first I fell into the same trap that anybody coming from the “old world” would fall into. I selected what seemed to be the most used web framework (Express) and rolled with that, but I soon found out that I had it all backwards.

I didn’t want to write the fiftieth web application. I wanted to do something else. Something new.

When you look at the tempalias source code (yeah – the whole service is open source so all of us can learn from it), you’ll notice that no single byte of HTML is dynamically generated.

I ripped out Express. I built a RESTful API for the main functionality of the site: Creating aliases. I built a server that does just that and nothing more.

I leveraged all the nice features JavaScript as a language provides me with to build a really cool backend. I used all the power that node provides me with to build a really cool (and simple!) server to web-enable that API (posting and reading JSON to and from the server).
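
As a rough idea of what “posting and reading JSON” looks like at that level, here’s a sketch in plain node (the /aliases path and payload are made up, not the actual tempalias code):

    // Hypothetical JSON endpoint: read the posted body, parse it, answer with JSON.
    var http = require('http');

    http.createServer(function (req, res) {
        if (req.method === 'POST' && req.url === '/aliases') {
            var body = '';
            req.on('data', function (chunk) { body += chunk; });
            req.on('end', function () {
                var alias = JSON.parse(body); // no error handling in this sketch
                res.writeHead(201, {'Content-Type': 'application/json'});
                res.end(JSON.stringify({ created: true, alias: alias }));
            });
        } else {
            res.writeHead(404, {'Content-Type': 'application/json'});
            res.end(JSON.stringify({ error: 'not found' }));
        }
    }).listen(8080);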

The web client itself is just a client to that API. No single byte of that client is dynamically generated. It’s all static files. It’s using Sammy, jQuery, HTML and CSS to do its thing, but it doesn’t do anything the API I built on node doesn’t expose.

Because it’s static HTML, I could serve that directly from the nginx I’m running in front of node.

But because I wanted the service to be self-contained, I plugged in node-paperboy to serve the static files from node too.

Paperboy is very special and very, very cool.

It’s not trying to replace node’s HTTP library. It’s not trying to abstract away all the niceties of node’s excellent HTTP support. It’s not even trying to take over the creation of the actual HTTP server. Paperboy is just a function you call with the request and response object you got as part of node’s HTTP support.

Whether you want to call it or not is your decision.

If you want to handle the request, you handle it.

If you don’t, you pass it on to paperboy.

Or foobar.

Or whatever.
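
A sketch of that hand-off, assuming paperboy’s deliver(webroot, req, res) entry point (check the paperboy README for the exact API; the paths and the /aliases prefix are made up):

    // Hypothetical: handle the API yourself, hand everything else to paperboy.
    var http = require('http');
    var path = require('path');
    var paperboy = require('paperboy');

    var WEBROOT = path.join(__dirname, 'static'); // made-up static file directory

    function handleApiRequest(req, res) {
        // stand-in for the real API code (e.g. the JSON endpoint sketched above)
        res.writeHead(200, {'Content-Type': 'application/json'});
        res.end('{}');
    }

    http.createServer(function (req, res) {
        if (req.url.indexOf('/aliases') === 0) {
            handleApiRequest(req, res);          // you want this request? Handle it.
        } else {
            paperboy.deliver(WEBROOT, req, res); // otherwise, pass it on to paperboy
        }
    }).listen(8080);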

Node is the UNIX of the tools to build servers with: it provides small, dedicated tools that do one task, but truly, utterly excel at doing so.

So the libraries you are looking for are not the huge frameworks that do everything except the one bit you really need.

You are looking for the excellent small libraries that live the spirit of node. You are looking for libraries that do one thing well. You are looking for libraries like paperboy. And you are relying on the excellent HTTP support to build your own libraries where the need arises.

It’s still very early in node’s lifetime.

You can’t expect everything to be there, ready to use.

For some cases, it already is. Need a DNS server? You can do that. Need an SMTP daemon? Easy. You can do that. Need an HTTP server that understands the HTTP protocol really well and provides excellent support for adding your own functionality? Go for it.

But above all: You want to write your server in a kick-ass language? You want to never have to care about race conditions when reading, modifying and writing to a variable? You want to be sure not to waste hours and hours of work debugging code that looks right but isn’t?

Then node is for you.

It’s no turnkey solution yet.

It’s up to you to make the most out of it. To combine it with something more traditional. Or to build something new, maybe rethinking how you approach the problem. Node can help you to provide an awesome foundation to build upon. It alone will never provide you with a blog in 10 minutes. Supporting libraries don’t at this time provide you with that blog.

But they empower you to build it in a way that withstands even the heaviest pounding, that makes the most out of the available resources and above all, they allow you to use your language of choice to do so.

JavaScript.

Sticking to the iPhone

Recently, I got the chance to play around with a Nexus One and started using it as my main phone, with the intent of making it my new main phone for good. I’d had enough of the lack of background apps and the closedness of the iPhone, so I thought I should really go through with this.

Unfortunately though, this didn’t work out so well.

People who haven’t tried both devices would probably never understand this, but the Nexus One touch screen is really, really bad. The bit of squigglyness you see on the picture in the linked article seems like no big deal, but after one week of Nexus One and then going back to the iPhone, you can’t imagine how smooth it feels to use the iPhone again.

It’s like being in a very noisy environment and then stepping back into a quiet one.

Why did I try the iPhone again?

While I got podcast listening to work correctly on the Android phone, I noticed that a lot of my commuting time is not just spent listening to podcasts: some games (currently Doodle Jump and Plants vs. Zombies) play a huge role too, and the supply of games on the Android platform is really, really bad.

And don’t get me started on the keyboard: neither the built-in one nor the one I switched to comes even close to what the iPhone provides. I’m about 5 times as fast on the iPhone as on the Android. Worse: after switching to the Nexus One, I again began dreading having to write SMSes, which usually spells death for any phone for me.

Speaking of the keyboard: the built-in one is completely unusable for multilingual people. The text I write on a phone is about 50% English and 50% German. The Android keyboard doesn’t allow switching the language on the fly (while the English and German keyboards are quite alike, the keyboard language also determines the auto-correction language), and it couples the keyboard language to the phone UI language.

This is really bad, as over the years I became so accustomed to English UIs that I frankly cannot work with German UIs any more – also because of the usually really bad translations. Eek.

So, let’s tally.

iPhone

Advantages:

  • Working touch screen
  • Smoother graphics and thus more fluid usage
  • Never crashes
  • Apps I learned to depend on are available (Wemlin, Doodle Jump […])
  • No background noise in the headphones

Disadvantages:

  • Can’t replace internal apps with better ones
  • Needs iTunes to download podcasts
  • No background apps
  • No buzzing of pictures (at least not if you want a location attached to your buzz)

Android Device

Advantages:

  • Background applications (I wanted this for working IM, as the notification-based solutions on the iPhone never seemed to work)
  • Built-in applications can be replaced at will
  • Ability to buzz pictures (yeah. I know. Who needs this?)
  • On-the-fly podcast download

Disadvantages:

  • Really bad touch screen (jumpy, inaccurate, sometimes losing calibration until I reboot it)
  • Very mediocre applications available
  • UI sometimes slow
  • Very bad battery life (doesn’t make it through one day even when not heavily used)
  • Crashes about once a day
  • Did I already write “really bad touch screen”? I guess I did, but: “really bad touch screen”
  • Sometimes really bad, sometimes just bad background noise in the headphones. According to HTC, this can be fixed by periodically turning off the phone and removing the battery(!).
  • No Audible support (I know I could probably remove the DRM, but why bother at the moment?)

While I thought I could live with the touch screen, the moment I turned on the iPhone again to play a round of “Plants vs. Zombies”, which had just come out for the i-devices, I saw how a touch screen is supposed to work and could not bring myself to go back. I still wanted the one big iPhone disadvantage – the lack of non-SMS-based messaging – fixed for me though, so here’s what I’ve done:

  • WhatsApp on the iPhone works really well as an SMS replacement (something I was after for a very long time)
  • meebo so far has never disconnected me on the iPhone, which is something all other iPhone IM clients have done to me – and even on Android, meebo tended to disconnect and not reconnect.

For me, that’s it. No more experiments. Whatever I tried in order to get away from Apple’s dictate always failed. The N900 is a geek’s heaven, but it doesn’t support my expensive in-ear iPhone headset and doesn’t provide any halfway interesting games. Android has a bad touch screen, next to no battery life, and is slow and crashy.

It’s really hard for me to admit, as a geek and a strong believer in the freedom to use something I bought for whatever purpose I want, but Apple, even after two years, still rules the phone market in usability and hardware build quality.

Can’t wait to see what the next iteration of the iPhone will be, though they don’t have to change anything as long as their competition still thinks it’s ok to save $2 on each phone by using a crappy touchscreen and a crappy battery.

JSONP. Compromised in 3…2…1…

To embed a vimeo video on some page, I had a look at their different methods for embedding and the easiest one seemed to be what is basically JSONP – a workaround for the usual restriction of disallowing AJAX over domain boundaries.

But did you know that JSONP not only works around the cross-domain restriction, it basically is one huge cross-site scripting exploit, and there’s nothing you can do about it?

You might have heard this and you might have found articles like this one, thinking that using libraries like that would make you safe. But that’s an incorrect assumption. The solution provided in the article has it backwards: it only helps to protect the originating site against itself, but it does not help at all to protect the calling site from the remote site.

You see, the idea behind JSONP is that you source the remote script using <script src="http://remote-service.example.com/script.js"> and the remote script then (after being loaded into your page and thus being part of your page) is supposed to call some callback of the original site (from a browser’s standpoint it is part of the original site).
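
A sketch of that pattern (the names and URLs are made up): the “data” arrives as an executable script which is expected to call your callback – but nothing forces it to stop there.

    // Hypothetical JSONP round trip. The page defines a callback…
    function handleVideoData(data) {
        document.title = 'Video: ' + data.title;
    }

    // …and then pulls in a remote script that is supposed to call it:
    var script = document.createElement('script');
    script.src = 'http://remote-service.example.com/video.js?callback=handleVideoData';
    document.getElementsByTagName('head')[0].appendChild(script);

    // What the remote server is supposed to send back:
    //     handleVideoData({"title": "Some video"});
    // But it is a full script running in the context of your page, so it could
    // just as well rewrite forms, read cookies or inject more script tags.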

The problem is that you get no control over the loading, let alone the content, of that remote script. Because the cross-domain restrictions prevent you from making an AJAX request to a remote server, you use the native HTML method for cross-domain requests (which arguably should never have been allowed in the first place): you load the remote script into your page and execute it in the context of your page. At that moment you relinquish all control over your site.

Because you never see that script until it is loaded, you cannot control what it can do.

Using JSONP is basically subjecting yourself to an XSS attack by giving the remote end complete control over your page.

And I’m not just talking about malicious remote sites… what if they themselves are vulnerable to some kind of attack? What if they were the target of a successful attack? You can’t know and once you do know it’s too late.

This is why I recommend you never rely on JSONP and find other solutions for remote scripting: use a local proxy that does sanitization (i.e. strict JSON parsing, which will save you), or rely on the cross-domain messaging that was added in later revisions of the upcoming HTML5 standard.
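
With a local proxy in place, for example, the response stays data instead of becoming code – strict parsing will throw on anything that isn’t JSON (the proxy URL is made up; JSON.parse is available natively in current browsers or via Crockford’s json2.js):

    // Hypothetical same-origin request to your own sanitizing proxy.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/proxy/vimeo?video=12345', true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            var data = JSON.parse(xhr.responseText); // parses data, never executes it
            // use data.title, data.thumbnail, … as plain values from here on
        }
    };
    xhr.send(null);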

Twisted Tornado

Lately, the net has been busy talking about the new web server released by FriendFeed last week and how their server basically does the same thing as the Twisted framework that has been around so much longer. One blog entry ends with

Why Facebook/Friendfeed decided to create a new web server is completely beyond us.

Well. Let me add my two cents. Not from a Python perspective (I’m quite the Python newbie, only having completed one bigger project so far), but from a software development perspective. I feel qualified to add the cents because I’ve been there and done that.

When you start any project, you will be on the lookout for a framework or solution to base your work on. Oftentimes, you already have some kind of idea of how you want to proceed and what the different requirements of your solution will be.

Of course, you’ll be comparing your requirements against the solutions that are around, but chances are that none of the existing solutions will match your requirements exactly, so you will be faced with changing them to match.

This involves not only the changes themselves but also other considerations:

  • is it even possible to change an existing solution to match your needs?
  • if the existing solution is an open source project, is there a chance of your changes being accepted upstream (this is not a given, by the way).
  • if not, are you willing to back- and forward-port your changes as new upstream versions get released? Or are you willing to stick with the version for eternity, manually back-porting security-issues?

and most importantly

  • what takes more time: writing a tailor-made solution from scratch or learning how the most closely matching solution ticks to make it do what you want?

There is a very strong perception around that too many features mean bloat and that a simpler solution always trumps the complex one.

Have a look at articles like «Clojure 1, PHP 0» which compares a home-grown, tailor-made solution in one language to a complete framework in another and it seems to favor the tailor-made solution because it was more performant and felt much easier to maintain.

The truth is, you can’t have it both ways:

Either you are willing to live with «bloat» and customize an existing solution, adding some features and not using others, or you are unwilling to accept any bloat and you will do a tailor-made solution that may be lacking in features, may reimplement other features of existing solutions, but will contain exactly the features you want. Thus it will not be «bloated».

FriendFeed decided to go the tailor-made route, but unlike the many other projects that go the tailor-made route every day (take Django’s reimplementations of many existing Python technologies like templating and ORM as another example) and keep the result internal, they actually went public.

Not with the intention to bad-mouth Twisted (though it kinda sounded that way due to bad choice of words), but with the intention of telling us: «Hey – here’s the tailor-made implementation which we used to solve our problem – maybe it is or parts of it are useful to you, so go ahead and have a look».

Instead of complaining that reimplementation and a bit of NIH was going on, the community could embrace the offering and try to pick the interesting parts they see fit for their implementation(s).

This kind of reinventing the wheel is a standard process that is going on all the time, both in the Free Software world and in the commercial software world. There’s no reason to be concerned or alarmed. Instead we should be thankful for the groups that actually manage to put their code out for us to see – in so many cases, we never get a chance to see it and thus lose a chance at making our solutions better.

(Unicode-)String handling done right

Today, I found myself reading the chapter about strings on diveintopython3.org.

Now, I’m no Python programmer by any means. Sure. I know my share of Python and I really like many of the concepts behind the language. I have even written some smaller scripts in Python, but it’s not my day-to-day language.

That chapter about string handling really really impressed me though.

In my opinion, handling Unicode strings the way Python 3 does it is exactly how it should be done in every development environment: keep strings and collections of bytes completely separate and provide explicit conversion functions to convert from one to the other.

And hide the actual implementation from the user of the language! A string is a collection of characters. I don’t have to care how these characters are stored in memory and how they are accessed. When I need that information, I convert that string to a collection of bytes, explicitly stating the encoding I want to be used.

This is exactly how it should work, but implementation details leaking into the language are mushing this up in every other environment I know of, making it a real pain to deal with multibyte character sets.

Features like this are what convince me to look into new stuff. Maybe it IS time to do more Python after all.

All-time favourite tools – update

It has been more than four years since I’ve last talked about my all-time favourite tools. I guess it’s time for an update.

Surprisingly, I still stand behind the tools listed there: my love for Exim is still unchanged (it just got bigger lately – but that’s for another post). PostgreSQL is cooler than ever and powers PopScan day-in, day-out without flaws.

Finally, I’m still using InnoSetup for my Windows Setup programs, though that has lost a bit of importance in my daily work as we’re shifting more and more to the web.

Still. There are two more tools I must add to the list:

  • jQuery is a JavaScript helper library that allows you to interact with the DOM of any webpage, hiding away browser incompatibilities. There are a couple of libraries out there which do the same thing, but only jQuery is such a pleasure to work with: it works flawlessly, provides one of the most beautiful APIs I’ve ever seen in any library and there are tons and tons of self-contained plug-ins out there that help you do whatever you would want to on a web page (see the little sketch after this list).
    jQuery is an integral part of making web applications equivalent to their desktop counterparts in matters of user interface fluidity and interactivity.
    All while being such a nice API that I’m actually looking forward to doing the UI work – as opposed to the earlier days, which can most accurately be described as “UI sucks”.
  • git is my version control system of choice. There are many of them out there in the world and I’ve tried the majority of them for one thing or another. But only git combines awesome backwards compatibility with what I’ve used before and what’s still in use by my coworkers (SVN) with the ability to beautify commits, feature branches, very high execution speed and very easy sharing of patches.
    No single day passes without me using git and running into a situation where I’m reminded of the incredible beauty that is git.
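
As a tiny, made-up example of why that API is such a pleasure (selector and markup invented for illustration):

    // Mark all external links, open them in a new window and fade in a hint –
    // one chained, readable statement per idea.
    $('a[href^="http://"]').attr('target', '_blank').addClass('external');
    $('#hint').text('External links open in a new window.').fadeIn('slow');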

In four years, I’ve not seen another tool I’ve used as consistently and with as much joy as git and jQuery, so those two certainly have earned their spot in my heart.

The consumer loses once more

DRM strikes again. This time, apparently, the PC version of Gears of War stopped working – seemingly caused by an expired certificate.

Even though I do not play Gears of War, I take issue with this because of a multitude of problems:

First, it’s another case where DRM does nothing to stop piracy but punishes the honest user for buying the original – no doubt the cracked versions of the game will continue to work due to the stripped-out certificate check.

Second, using any form of DRM with any type of media is incredibly shortsighted if it requires any external support to work correctly. Be it a central authorization server, be it a correct clock – you name it. Sooner or later you won’t sell any more of your media and thus you will shut your DRM servers down, screwing the most loyal of your customers.

This is especially apparent in the games market. Like no other market, there exists a really vivid and ever-growing community of retro gamers. Like no other type of media, games seem to make users want to go back to them and see them again – even after ever so many years.

Older games are speedrun, discussed and even utterly destroyed. Even if the player count declines over the years, it will never reach zero.

Now imagine DRM in all those old games once you turn off the DRM server or a certificate expires: No more speedruns. No more discussion forums. Nothing. The games are devalued and you as a game producer shut out your most loyal customers (those that keep playing your game after many years).

And my last issue is with this Gears of War case in particular: a time-limited certificate does not make any sense here. It’s identity that must be checked. Let’s say the AES key used to encrypt the game was encrypted with the private key of the publisher (so the public key will be needed to decrypt it) and the public key is signed by the publisher’s CA. Then, while you do check the identity of the publisher’s certificate, checking the time certainly is not needed: if it was valid once, it’s probably valid in the future as well.
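
To make the distinction concrete, here’s an illustrative sketch (written against node’s crypto module, not the actual Gears of War code; file names and algorithm are assumptions) of an identity check that has no built-in expiry:

    // Illustrative only: verify that the key material was signed by the publisher.
    // The check establishes identity – nothing about it ever has to expire.
    var crypto = require('crypto');
    var fs = require('fs');

    var publisherPublicKey = fs.readFileSync('publisher-public.pem', 'utf8'); // hypothetical
    var encryptedGameKey   = fs.readFileSync('game-key.bin');                 // hypothetical
    var signature          = fs.readFileSync('game-key.sig');                 // hypothetical

    var verifier = crypto.createVerify('RSA-SHA256');
    verifier.update(encryptedGameKey);

    if (!verifier.verify(publisherPublicKey, signature)) {
        throw new Error('key material was not signed by the publisher');
    }
    // identity established – the game can go on decrypting, today or in ten years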

Or better: a cracker with the ability to create certificates that look like they were signed by the publisher will very likely also be able to give them any validity period.

This issue here is that Gears of War probably uses some library function to check for the certificate and this library function also checks the timestamp on the certificate. The person that issued the certificate either thought that “two years is well enough” or just used the default value in their software.

The person using the library function just uses that, not thinking about the timestamp at all.

Maybe, the game just calls some third-party DRM library which in turn calls the X.509 certificate validation routines and due to “security by obscurity” doesn’t document how the DRM works, thus not even giving the developer (or certificate issuer) any chance to see that the game will stop working once the certificate runs out.

This is laziness.

So it’s not just monetary issues that cause DRMed stuff to stop working. It’s also laziness and a false sense of security.

DRM is doomed to fail and the industry finally needs to see that.