AJAX, Architecture, Frameworks and Hacks

Today I was talking with @brainlock about JavaScript, AJAX and Frameworks and about two paradigms that are in use today:

The first is the “traditional” paradigm where your JS code is just glorified view code. This is how AJAX worked in the early days and how many people still use it: your JS code intercepts a click somewhere and sends an AJAX request to the server, which returns either more JS code that simply gets evaluated (giving the server a kind of indirect access to the client DOM) or an HTML fragment that gets inserted at the appropriate spot.

This means that your JS code will be ugly (especially the code coming from the server), but it has the advantage that all your view code is right there where all your controllers and your models are: on the server. You see this pattern in use on the 37signals pages or in the github file browser for example.
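In code, the client half of this pattern boils down to very little. Here is a minimal sketch (using today’s fetch API; the `X-Requested-With` header and the idea that the server answers such a request with just the partial are assumptions about the backend):

```javascript
// Sketch of the "server renders HTML fragments" flavour of AJAX.

// Swap the container's content for an HTML fragment from the server.
function applyFragment(container, html) {
  container.innerHTML = html;
}

// Intercept a click on a folder link and load the fragment instead of
// doing a full page load.
async function onFolderClick(event, container) {
  event.preventDefault();
  const response = await fetch(event.currentTarget.href, {
    headers: { "X-Requested-With": "XMLHttpRequest" },
  });
  applyFragment(container, await response.text());
}
```

The client never interprets the data; it just drops the server-rendered markup into place.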

Keep the file browser in mind as I’m going to use that for an example later on.

The other paradigm is to go the other way around and promote JS to a first-class language. Now you build a framework on the client end and transmit only data (XML or JSON, but mostly JSON these days) from the server to the client. The server just provides a REST API for the data and serves static HTML files. All the view logic lives only on the client side.

The advantages are that you can organize your client-side code much better (using Backbone, for example), that there’s no expensive view rendering on the server side, and that you basically get your third-party API for free, because the API is the only thing the server provides.

This paradigm is used by the new Twitter web client or by my very own tempalias.com.

Now @brainlock is a heavy proponent of the second paradigm. After being enlightened by the great Crockford, we both love JS, and we have both worked on huge messes of client-side JS code that have grown over the years, lack structure and sometimes feel like copy pasta. In our defense: tons of that code was written in the pre-enlightened age (2004).

I, on the other hand, see some justification for the first pattern as well, and I wouldn’t throw it away so quickly.

The main reason: It’s more pragmatic, it’s more DRY once you need graceful degradation and arguably, you can reach your goal a bit faster.

Let me explain by looking at the github file browser:

If you have a browser that supports the HTML5 history API, then a click on a directory will reload the file list via AJAX and at the same time update the URL using pushState (so that the current view keeps its absolute URL, which stays valid even if you open it in a new browser).

If a browser doesn’t support pushState, it will gracefully degrade by just using the traditional link (and reloading the full page).
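The degradation hinges on one feature check. A sketch (the window-like object is passed in as a parameter purely so the check can be exercised outside a browser):

```javascript
// Feature-detect the HTML5 history API's pushState.
function supportsPushState(win) {
  return !!(win.history && typeof win.history.pushState === "function");
}
```

Only when `supportsPushState(window)` is true do you attach the AJAX click handlers; everywhere else, the plain links keep working and trigger full page loads.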

Let’s map this functionality to the two paradigms.

First the hacky one:

  1. You render the full page with the file list using a server-side template
  2. You intercept clicks in the file list; if the target is a folder:
  3. You request the new file list
  4. The server renders only the file-list partial (in Rails terms: just the file-list part, without the rest of the site)
  5. The client gets that HTML and inserts it in place of the current file list
  6. You patch up the URL using pushState

Done. The view code lives only on the server. Whether the file list is requested via the AJAX call or a traditional full page load doesn’t matter: the code path is exactly the same. The only difference is that the rest of the page isn’t rendered in case of an AJAX call. You get graceful degradation at no additional cost.
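The server side of this flow can be sketched with two hypothetical render helpers (`renderFileList` and `renderLayout` are my own names, not anyone’s real API); the point is that one single template serves both the AJAX path and the full-page path:

```javascript
// The one and only file-list template, on the server.
function renderFileList(files) {
  return "<ul>" + files.map((f) => `<li>${f}</li>`).join("") + "</ul>";
}

// The surrounding page chrome, skipped for AJAX requests.
function renderLayout(body) {
  return `<html><body>${body}</body></html>`;
}

// Same code path for both kinds of request; only the wrapping differs.
function respond(isAjaxRequest, files) {
  const fragment = renderFileList(files);
  return isAjaxRequest ? fragment : renderLayout(fragment);
}
```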

Now assuming you want to keep graceful degradation possible and you want to go the JS framework route:

  1. You render the full page with the file list using a server-side template
  2. You intercept the click on a folder in the file list
  3. You request the JSON representation of the target folder
  4. You use that JSON to fill a client-side template which is a copy of the server-side partial
  5. You insert the resulting HTML in place of the current file list
  6. You patch up the URL using pushState

The number of steps is the same, but the amount of work isn’t: if you want graceful degradation, you write the file-list template twice, once as a server-side template and once as a client-side template. Both are quite similar, but usually you’ll be forced to use slightly different syntax. If you update one, you have to update the other, or the experience will differ depending on whether you click a link or open the URL directly.
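To make the duplication concrete, here is what the client-side twin of the server partial might look like (the JSON field names `type`, `url` and `name` are assumptions about the API):

```javascript
// Client-side duplicate of the server-side file-list partial: it must
// produce the same markup, but from the JSON the REST API returns.
function renderFileListFromJson(entries) {
  return (
    "<ul>" +
    entries
      .map((e) => `<li class="${e.type}"><a href="${e.url}">${e.name}</a></li>`)
      .join("") +
    "</ul>"
  );
}
```

Every change to the server partial now has to be mirrored here, by hand, in a slightly different templating dialect.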

You are also duplicating the code that fills that template: on the server side you use ActiveRecord or whatever other ORM; on the client side you’d probably use Backbone to do the same thing, except that now your backend isn’t the database but the JSON response. Backbone is really cool and a huge time-saver, but it’s still more work than not doing it at all.

OK. Then let’s skip graceful degradation and make this a JS-only client app (good luck trying to get away with that). Now the view code on the server goes away, and you are left with the model on the server to retrieve the data, the model on the client (Backbone helps a lot here, but there’s still a substantial amount of code to write that otherwise wouldn’t be needed) and the view code on the client.

Now don’t get me wrong.

I love the idea of promoting JS to a first class language. I love JS frameworks for big JS only applications. I love having a “free”, dogfooded-by-design REST API. I love building cool architectures.

I’m just saying that at this point, doing it right is so much work that the old ways do have their advantages, and we shouldn’t condemn them for being hacky. True, they are. But they are also pragmatic.

DNSSEC to fix the SSL mess?

After Firesheep it has become clear that there’s no way around SSL.

But many people (and I’m including myself) are still unhappy with the fact that to roll out SSL, you basically have to pay a sometimes significant premium for the certificate. And that’s not all: you have to pay the same fee every n years (and while you could argue that the CA does some work the first time, every following renewal is plain sucking money from you), and you have to remember to actually do it unless you want embarrassing warnings popping up for your users.

The usual suggestion is to make browsers accept self-signed certificates without complaining, but that doesn’t really work to prevent a Firesheep style attack and is arguably even worse as it would allow not only your session id, but also your password to leak from sites that use the traditional SSL-for-login-HTTP-afterwards mechanism.

See my comment on HackerNews for more details.

To make matters worse, last week news about a CA being compromised and issuing fraudulent (but still trusted) certificates made the rounds, so now even with the current CA based security mechanism, we still can’t completely trust the infrastructure.

Thinking about this, I had an idea.

Let’s assume that one day, one glorious day, DNSSEC will actually be deployed.

If that’s the case, then if I was the owner of gnegg.ch, I could just publish the certificate (or its fingerprint or a link to the certificate over SSL) in the DNS as a TXT record. DNSSEC would ensure that it was the owner of the domain who created the TXT entry and that the domain is the real one and not a faked one.

So if that entry says that gnegg.ch is supposed to serve a certificate with the fingerprint 0xdeadbeef, then a connecting browser would be sure that if the site is serving that certificate (and has the matching private key), then the connection would be secure and not man-in-the-middle’d.

Even better: If I lose the private key of gnegg.ch, I would just update the TXT record, making the old key useless. No non-working CRL or OCSP. Just one additional DNS query.
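The check a browser would have to do is trivial. A sketch, using the 0xdeadbeef fingerprint from above (the `cert-sha256=…` TXT record format is my own made-up convention, not any standard):

```javascript
// Extract the pinned fingerprint from a DNSSEC-validated TXT record,
// e.g. "cert-sha256=0xdeadbeef". Returns null if the record is malformed.
function parseCertTxtRecord(txt) {
  const match = /^cert-sha256=(0x[0-9a-f]+)$/.exec(txt);
  return match ? match[1] : null;
}

// Trust the connection only if the certificate the site serves matches
// the fingerprint published (and DNSSEC-signed) in the DNS.
function certificateTrusted(txtRecord, servedFingerprint) {
  const pinned = parseCertTxtRecord(txtRecord);
  return pinned !== null && pinned === servedFingerprint;
}
```

Revocation then really is just changing the TXT record: the next lookup yields a different pinned fingerprint and the old certificate stops validating.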

And you know what? It would put CAs out of business for signing of site certificates as a self-signed certificate would be as good as an official one (they would still be needed to sign your DNSSEC zone file of course, but that could be done by the TLD owners).

Oh, and by the way: I could create my certificate with an incredibly long expiration time (or none at all): if I want the certificate to be invalid, I remove or change the TXT record and I’m done. As simple as that. No more embarrassing warnings. No more fear of missing the deadline.

Now, this feels so incredibly simple that there must be something I’m missing. What is it? Is it just that politics is preventing DNSSEC from ever being real? Is there an error in my thinking?