AJAX, Architecture, Frameworks and Hacks

Today I was talking with @brainlock about JavaScript, AJAX and Frameworks and about two paradigms that are in use today:

The first is the “traditional” paradigm where your JS code is just glorified view code. This is how AJAX worked in the early days and how many people are still using it: your JS code intercepts a click somewhere and sends an AJAX request to the server, which returns either more JS code that simply gets evaluated (thus giving the server a kind of indirect access to the client DOM) or an HTML fragment that gets inserted at the appropriate spot.
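A minimal sketch of this pattern (jQuery-style; the selectors and URL are made up for illustration):

    // intercept clicks on folder links and let the server render the view
    $('#file-list a.folder').click(function (e) {
      e.preventDefault();
      $.get(this.href, function (html) {
        // blindly insert whatever HTML fragment the server sent back
        $('#file-list').html(html);
      });
    });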

This means that your JS code will be ugly (especially the code coming from the server), but it has the advantage that all your view code lives right where your controllers and models are: on the server. You see this pattern in use on the 37signals pages or in the github file browser, for example.

Keep the file browser in mind as I’m going to use that for an example later on.

The other paradigm is to go the other way around and promote JS to a first-class language. Now you build a framework on the client side and transmit only data (XML or JSON, but mostly JSON these days) from the server to the client. The server just provides a REST API for the data and serves static HTML files. All the view logic lives on the client side.
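A minimal sketch of this pattern (the endpoint and field names are made up): the server only returns JSON and the client does all the rendering.

    // fetch the data as JSON and build the markup on the client
    $.getJSON('/api/folders/src', function (data) {
      var html = '<ul>';
      $.each(data.files, function (i, file) {
        html += '<li>' + file.name + '</li>';
      });
      html += '</ul>';
      $('#file-list').html(html);
    });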

The advantages are that you can organize your client-side code much better (for example using Backbone), that there's no expensive view rendering on the server side and that you basically get your third-party API for free, because the API is the only thing the server provides.

This paradigm is used by the new twitter webpage and by my very own tempalias.com.

Now @brainlock is a heavy proponent of the second paradigm. After being enlightened by the great Crockford, we both love JS, and we both have worked on huge messes of client-side JS code that have grown over the years, lack structure and sometimes feel like copy pasta. In our defense: tons of that code was written in the pre-enlightened age (2004).

I, on the other hand, see some justification for the first pattern as well and wouldn't throw it away so quickly.

The main reason: it's more pragmatic, it's more DRY once you need graceful degradation and, arguably, it gets you to your goal a bit faster.

Let me explain by looking at the github file browser:

If you have a browser that supports the HTML5 history API, a click on a directory will reload the file list via AJAX and at the same time update the URL using pushState (so that the current view keeps an absolute URL which stays valid even when you open it in a new browser).

If a browser doesn’t support pushState, it will gracefully degrade by just using the traditional link (and reloading the full page).
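In code, that feature detection might look roughly like this (loadFileList is a hypothetical helper that does the AJAX call and inserts the result):

    // only hijack the click when the history API is actually there
    $('#file-list').on('click', 'a.folder', function (e) {
      if (!window.history || !window.history.pushState) {
        return; // old browser: the plain link causes a full page load
      }
      e.preventDefault();
      loadFileList(this.href);                 // reload just the file list
      history.pushState(null, '', this.href);  // keep the URL bookmarkable
    });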

Let’s map this functionality to the two paradigms.

First the hacky one:

  1. You render the full page with the file list using a server-side template
  2. You intercept clicks on the file list; if the target is a folder:
  3. you request the new file list
  4. the server renders the file list partial (in Rails terms: just the file list part, without the rest of the site)
  5. the client takes that HTML and inserts it in place of the current file list
  6. you patch up the URL using pushState

Done. The view code lives only on the server. Whether the file list is requested via the AJAX call or via a traditional full page load doesn't matter: the code path is exactly the same. The only difference is that the rest of the page isn't rendered in the AJAX case. You get graceful degradation at no additional cost.
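GitHub's real implementation is Rails, but to illustrate the single code path, here's a hypothetical sketch of such a route in Express (listFiles and the template names are made up):

    var express = require('express');
    var app = express();
    app.set('view engine', 'ejs'); // assuming EJS templates

    app.get('/tree/*', function (req, res) {
      var files = listFiles(req.params[0]); // hypothetical model call

      if (req.xhr) {
        // AJAX call: render just the file list partial
        res.render('partials/filelist', { files: files });
      } else {
        // traditional request: render the full page around the same partial
        res.render('tree', { files: files });
      }
    });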

Now assuming you want to keep graceful degradation possible and you want to go the JS framework route:

  1. You render the full page with the file list using a server-side template
  2. You intercept the click to the folder in the file list
  3. You request the JSON representation of the target folder
  4. You use that JSON representation to fill a client-side template which is a copy of the server-side partial
  5. You insert that HTML at the place where the file list is
  6. You patch up the URL using pushState

The number of steps is the same, but the amount of work isn't: if you want graceful degradation, you write the file list template twice, once as a server-side template and once as a client-side template. Both are quite similar, but usually you'll be forced to use slightly different syntax. If you update one, you have to update the other, or the experience will differ depending on whether you click a link or open the URL directly.
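To see what I mean, compare a hypothetical ERB partial on the server with its Underscore.js twin on the client – same structure, subtly different syntax:

    <!-- server side (ERB): -->
    <% @files.each do |file| %>
      <li><%= file.name %></li>
    <% end %>

    <!-- client side (Underscore.js template): -->
    <% _.each(files, function (file) { %>
      <li><%= file.name %></li>
    <% }); %>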

Also, you are duplicating the code which fills that template: on the server side you use ActiveRecord or whatever other ORM; on the client side you'd probably use Backbone to do the same thing, but now your backend isn't the database, it's the JSON response. Backbone is really cool and a huge time saver, but it's still more work than not doing it at all.
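A minimal sketch of that duplicated client-side model layer (the endpoint is made up):

    // the server already has a File model backed by the database;
    // the client needs another one, backed by the JSON API
    var File = Backbone.Model.extend({});

    var FileList = Backbone.Collection.extend({
      model: File,
      url: '/tree/src.json' // the JSON response replaces the database
    });

    var files = new FileList();
    files.fetch(); // GET the JSON and populate the collection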

OK. Then let's skip graceful degradation and make this a JS-only client app (good luck getting away with that). Now the view code on the server goes away and you are left with the model on the server to retrieve the data, the model on the client (Backbone helps a lot here, but there's still a substantial amount of code to write that otherwise wouldn't be needed) and the view code on the client.

Now don't get me wrong.

I love the idea of promoting JS to a first class language. I love JS frameworks for big JS only applications. I love having a “free”, dogfooded-by-design REST API. I love building cool architectures.

I'm just saying that, at this point, doing it right is so much work that the old ways still have their advantages and we shouldn't condemn them for being hacky. True, they are. But they are also pragmatic.

The IE rendering dilemma – solved?

A couple of months ago I wrote about the IE rendering dilemma: how to fix IE8's rendering engine without breaking all the corporate intranets out there? How to create a standards-oriented browser while still ensuring that the main customers of Microsoft – the enterprises – can run a current browser without having to redo all their (mostly internal) web applications?

Only three days after my posting, the IEBlog talked about IE8 passing the ACID2 test. And when you watch the video linked there, you'll notice that they indeed kept the IE7 engine untouched and added a switch to force IE8 into using the new rendering engine.

And yesterday, A List Apart showed us how it’s going to work.

While I completely understand Microsoft's solution and the reasoning behind it, I can't see any other browser doing what Microsoft recommends as a new standard. Keeping multiple rendering engines in the browser and defaulting to outdated ones is, in my opinion, a bad idea: browser download sizes increase considerably, security problems must be patched in multiple engines and, as the WebKit blog put it, it “[…] hurts the hackability of the code […]”.

As long as the other browser vendors have neither IE's market share nor the big company intranets depending on their browsers, I don't see any reason at all for them to adopt IE's model.

Also, when I'm doing (X)HTML/CSS work, it usually works and displays correctly in every browser out there – with the exception of IE's current engine. As long as browsers don't have awful bugs all over the place and you are not forced to hack around them (deviating from the standard in the process), there is no way a page you create will only work in one specific version of a browser. Even more so: when it breaks in a future version, that's a bug in the browser that must be fixed there.

Assuming that Microsoft will, finally, get it right with IE8 and subsequent browser versions, we web developers should be fine with

<meta http-equiv="X-UA-Compatible" content="IE=edge" />

on every page we output to a browser. These compatibility hacks are for people who don't know what they are doing. We know. We follow standards. And if IE begins to do so as well, we are fine with using the latest version of the rendering engine there is.
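If you'd rather not repeat that meta tag on every page, the same hint can be sent as an HTTP response header instead; a minimal sketch for Apache (assuming mod_headers is enabled):

    # send the engine hint site-wide instead of per page
    Header set X-UA-Compatible "IE=edge"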

If IE doesn’t play well and we need to apply braindead hacks that break when a new version of IE comes out, then we’ll all be glad that we have this method of forcing IE to use a particular engine, thus making sure that our hacks continue to work.

My PSP just got a whole lot more useful

Or useful at all – considering the games that are available for that console. To be honest: of all the consoles I have owned in my life, the PSP must be the most underused one. I basically own two games for it: Breath of Fire and Tales of Eternia – not only by this choice of titles, but also by reading this blog, you may notice a certain affinity for Japanese-style RPGs.

These are the closest thing to a successor of the classical graphic adventures I started my computer career with, minus the hard-to-solve puzzles, plus a (generally) much more interesting story. So for my taste, these things are a perfect match.

But back to the PSP. It's an old model – one of the first here in Switzerland. One of the first in the world, to be honest: I bought the thing WAAAY back with hopes of seeing many interesting RPGs – or even just good ports of old classics. Sadly, neither really happened.

Then, a couple of days ago, I found a usable copy of the game Lumines. Usable in the sense that when the guy in the store told me there was a sequel out and I told him that I did not intend to actually play the game, he just winked and wished me good luck with my endeavor.

Or in layman's terms: that particular version of Lumines had a security flaw allowing one to do a lot of interesting stuff with the PSP. Like installing an older, flawed version of the firmware, which in turn allows one to completely bypass whatever security the PSP would provide.

And now I'm running the latest M33 firmware: 3.71-M4.

What does that mean? It means that the formerly quite useless device has just become the device of my dreams: it runs SNES games. It runs PlayStation 1 games. It's portable. I can use it in bed without a large assembly of cables, gamepads and laptops. It's instant-on. It's optimized for console games. It has a really nice digital directional pad (gone are the days of struggling with diagonally-emphasized joypads – try playing Super Metroid with one of those).

It plays games like Xenogears, Chrono Cross and Chrono Trigger – it finally allows me to enjoy the RPGs of old in bed before falling asleep. Or in the bathtub. Or wherever.

It's a real shame that once more I had to resort to legally questionable means to get a particular device to the state I imagine it to be in. Why can't I buy any PS1 game directly from Sony? Why are the games I want to play not even available in Switzerland? Why is it illegal to play the games I want to play? Why are most of the gadgets sold today crippled in one way or another? Why is it illegal to un-cripple the gadgets we bought?

Questions I, frankly, don't want to answer. For years I have wanted a way to play Xenogears in bed and while taking a bath. Now I can, so I'm happy. And playing Xenogears. And loving it like when I was playing through that jewel of gaming history for the first time.

If I find time, expect some more in-depth articles about the greatness of Xenogears (just kidding – just read the early articles in this blog) or about how to finally get your PSP where you want it to be – there are lots of small things to keep in mind to make it all work satisfactorily.

More iPod fun

Last time I explained how to get .OGG-feeds to your iPod.

Today I'll show you one possible way to greatly increase the usability of non-official (read: not bought at audible.com) audiobooks you may have lying around in .MP3 format.

You see, your iPod treats every MP3 file in your library as music, regardless of length and content. This can be annoying, as the iPod (rightly so) forgets the position in the file when you stop playback. So when you return to the file, you'll have to start from the beginning and seek your way through it.

This is a real pain in the case of longer audiobooks and/or radio plays, of which I have a ton.

One way around this is to convert your audiobooks to AAC and rename the files to .m4b, which convinces iTunes to internally tag them as audiobooks and to enable the additional features (storing the position and providing a UI to change play speed).

Of course this would have meant converting a considerable part of my MP3 library to AAC, a format that is not yet as widely supported (not to speak of the quality loss I'd have to endure when converting one lossy format into another).

Then it dawned on me that there's another way to make the iPod store the position – even with MP3 files: podcasts.

So the idea was to create a script that reads my MP3 library and outputs RSS, making iTunes think it's dealing with a podcast.

And thus, audiobook2cast.php was born.

The script is very much tailored to my directory structure and probably won’t work at your end, but I hope it’ll provide you with something to work with.

About the script itself, there are two interesting points worth mentioning:

  • When checking a podcast, iTunes ignores the type attribute of the enclosure when determining whether a file can be played, so I had to make the URLs end in a fake .mp3 extension (see the sample item below).
  • I'm outputting a totally fake pubDate element in each <item> tag to force iTunes to sort the audiobook parts in ascending order.
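To illustrate both points, here's roughly what one generated item looks like (title and URLs are made up):

    <item>
      <title>Some Audiobook – Part 01</title>
      <!-- fake, strictly increasing pubDate forcing ascending sort order -->
      <pubDate>Mon, 01 Jan 2007 00:00:01 +0000</pubDate>
      <!-- the URL has to end in .mp3 – iTunes ignores the type attribute -->
      <enclosure url="http://example.com/audiobook2cast.php/book/01/fake.mp3"
                 type="audio/mpeg" length="12345678"/>
    </item>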

As I said: this is probably not useful to you out of the box, but it's certainly an interesting solution to an interesting problem.