AJAX, Architecture, Frameworks and Hacks

Today I was talking with @brainlock about JavaScript, AJAX and Frameworks and about two paradigms that are in use today:

The first is the “traditional” paradigm where your JS code is just glorified view code. This is how AJAX worked in the early days and how people are still using it. Your JS code intercepts a click somewhere, sends an AJAX request to the server and gets back either more JS code which just gets evaluated (thus giving the server kind of indirect access to the client DOM) or an HTML fragment which gets inserted at the appropriate spot.

This means that your JS code will be ugly (especially the code coming from the server), but it has the advantage that all your view code is right there where all your controllers and your models are: on the server. You see this pattern in use on the 37signals pages or in the github file browser for example.

Keep the file browser in mind as I’m going to use that for an example later on.

The other paradigm is to go the other way around and promote JS to a first-class language. Now you build a framework on the client end and transmit only data (XML or JSON, but mostly JSON these days) from the server to the client. The server just provides a REST API for the data plus serves static HTML files. All the view logic lives only on the client side.

The advantages are that you can organize your client-side code much better, for example using Backbone, that there’s no expensive view rendering on the server side and that you basically get your third-party API for free because the API is the only thing the server provides.

This paradigm is used for the new twitter webpage or in my very own tempalias.com.

Now @brainlock is a heavy proponent of the second paradigm. After being enlightened by the great Crockford, we both love JS, and we both have worked on huge messes of client-side JS code which have grown over the years, lack structure and sometimes feel like copy pasta. In our defense: tons of that code was written in the pre-enlightened age (2004).

I on the other hand see some justification for the first pattern as well and I wouldn’t throw it away so quickly.

The main reason: it’s more pragmatic, it’s more DRY once you need graceful degradation and, arguably, it gets you to your goal a bit faster.

Let me explain by looking at the github file browser:

If you have a browser that supports the HTML5 history API, then a click on a directory will reload the file list via AJAX and at the same time the URL will be updated using pushState (so that the current view keeps an absolute URL which stays valid even when you open it in a new browser).

If a browser doesn’t support pushState, it will gracefully degrade by just using the traditional link (and reloading the full page).
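In code, that decision boils down to a simple feature test. A minimal sketch, jQuery-style (the selector, class name and the loadFileList function are made up for illustration):

    // only hijack the click when we can also patch up the URL afterwards;
    // without pushState, we simply let the browser follow the link
    $('a.folder').click(function (e) {
        if (!(window.history && history.pushState)) {
            return; // graceful degradation: normal link, full page reload
        }
        e.preventDefault();
        loadFileList(this.href); // the AJAX path, sketched further down
    });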

Let’s map this functionality to the two paradigms.

First the hacky one:

  1. You render the full page with the file list using a server-side template
  2. You intercept clicks on the file list and check whether the target is a folder
  3. If it is, you request the new file list
  4. The server renders only the file list partial (in Rails terms: just the file list part) without the rest of the site
  5. The client receives that HTML and inserts it in place of the current file list
  6. You patch up the URL using pushState

Done. The view code lives only on the server. Whether the file list is requested via the AJAX call or via a traditional full page load doesn’t matter: the code path is exactly the same. The only difference is that the rest of the page isn’t rendered in the AJAX case. You get graceful degradation and no additional work.
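A sketch of steps 3 to 6 above (the element id is made up; in Rails terms you’d render just the partial whenever request.xhr? is true):

    // fetch only the server-rendered file list partial and patch it in
    function loadFileList(url) {
        $.get(url, function (html) {
            // the server sees the X-Requested-With header jQuery sets and
            // responds with just the partial instead of the full page
            $('#file-list').html(html);
            history.pushState(null, '', url); // keep the URL bookmarkable
        });
    }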

Now assuming you want to keep graceful degradation possible and you want to go the JS framework route:

  1. You render the full page with the file list using a server-side template
  2. You intercept the click on a folder in the file list
  3. You request the JSON representation of the target folder
  4. You use that JSON to fill a client-side template which is a copy of the server-side partial
  5. You insert the resulting HTML in place of the current file list
  6. You patch up the URL using pushState

The number of steps is the same, but the amount of work isn’t: if you want graceful degradation, you write the file list template twice – once as a server-side template, once as a client-side template. Both are quite similar, but usually you’ll be forced to use slightly different syntax. If you update one, you have to update the other, or the experience will differ depending on whether you click a link or open the URL directly.
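The same function sketched for the framework route (Mustache-style template; all names made up). The template string is the second copy of the view that the paragraph above complains about:

    // fetch the data and render it with a client-side copy of the template
    function loadFileList(url) {
        $.getJSON(url, function (files) {
            // this duplicates the server-side partial and has to be kept
            // in sync with it by hand
            var html = Mustache.render(
                '<ul>{{#files}}<li class="{{type}}">{{name}}</li>{{/files}}</ul>',
                { files: files });
            $('#file-list').html(html);
            history.pushState(null, '', url);
        });
    }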

Also you are duplicating the code which fills that template: on the server side you use ActiveRecord or whatever other ORM; on the client side you’d probably use Backbone to do the same thing, but now your backend isn’t the database but the JSON response. Backbone is really cool and a huge time saver, but it’s still more work than not doing it at all.

OK. Then let’s skip graceful degradation and make this a JS-only client app (good luck trying to get away with that). Now the view code on the server goes away and you are left with the model on the server to retrieve the data, with the model on the client (Backbone helps a lot here, but there’s still a substantial amount of code to write that otherwise wouldn’t be needed) and with the view code on the client.

Now don’t get me wrong.

I love the idea of promoting JS to a first class language. I love JS frameworks for big JS only applications. I love having a “free”, dogfooded-by-design REST API. I love building cool architectures.

I’m just thinking that at this point it’s so much work to do it right that the old ways retain their advantages and that we should not condemn them for being hacky. True, they are. But they are also pragmatic.

DNSSEC to fix the SSL mess?

After Firesheep it has become clear that there’s no way around SSL.

But still many people (myself included) are unhappy with the fact that to roll out SSL, you basically have to pay a sometimes significant premium for the certificate. And that’s not all: you have to pay the same fee every n years (and while you could argue that the CA does some work the first time around, every renewal afterwards is plain sucking money from you) and you have to remember to actually do it unless you want embarrassing warnings popping up for your users.

The usual suggestion is to make browsers accept self-signed certificates without complaining, but that doesn’t really prevent a Firesheep-style attack and is arguably even worse, as it would allow not only your session id but also your password to leak from sites that use the traditional SSL-for-login, HTTP-afterwards mechanism.

See my comment on HackerNews for more details.

To make matters worse, last week news about a CA being compromised and issuing fraudulent (but still trusted) certificates made the rounds, so even with the current CA-based security mechanism, we still can’t completely trust the infrastructure.

Thinking about this, I had an idea.

Let’s assume that one day, one glorious day, DNSSEC will actually be deployed.

If that’s the case, then as the owner of gnegg.ch, I could just publish the certificate (or its fingerprint, or a link to the certificate over SSL) in the DNS as a TXT record. DNSSEC would ensure that it was the owner of the domain who created the TXT entry and that the domain is the real one and not a fake.

So if that entry says that gnegg.ch is supposed to serve a certificate with the fingerprint 0xdeadbeef, then a connecting browser would be sure that if the site is serving that certificate (and has the matching private key), then the connection would be secure and not man-in-the-middle’d.
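As a sketch, such an entry could look like this in a zone file (the record name and payload format are completely made up – nothing like this is standardized):

    ; hypothetical, DNSSEC-signed TXT record publishing the site certificate's fingerprint
    gnegg.ch.  3600  IN  TXT  "tls-cert-fp=sha1:deadbeef..."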

Even better: if I lose the private key of gnegg.ch, I just update the TXT record, making the old key useless. No more relying on non-working CRLs or OCSP – just one additional DNS query.

And you know what? It would put CAs out of the business of signing site certificates, as a self-signed certificate would be as good as an official one (they would still be needed to sign your DNSSEC zone files of course, but that could be done by the TLD owners).

Oh, and by the way: I could create my certificate with an incredibly long expiration time (or none at all): if I want the certificate to become invalid, I remove or change the TXT record and I’m done. As simple as that. No more embarrassing warnings. No more fear of missing the deadline.

Now, this feels so incredibly simple that there must be something I’m missing. What is it? Is it just politics preventing DNSSEC from ever becoming reality? Is there an error in my thinking?


Things I can’t do with an iPhone/iPad

  • have a VoIP call going on when a mobile call/SMS arrives
  • read Kindle ebooks (I can now, but knowing Apple’s stance on “competing functionality” and with the advent of iBooks, how long do you think this will last?)
  • give it to our customers as another device to use with PopScan (it can’t be locked down and there’s no way to deploy apps centrally without going through Apple)
  • plug in any peripheral that isn’t Apple-sanctioned
  • plug in a peripheral and use it system-wide
  • play a SNES ROM (or any other console rom)
  • install Adblock (which especially hurts on the iPad)
  • consistently use IM (background notifications don’t work consistently)

The iPhone provides me with many advantages and thus I can live with its inherent restrictions (which are completely arbitrary – there’s no technical reason for them), but I see no point in buying yet another locked-down device that does half of the stuff I’d want it to do and does it half-assed at that.

Also, it’s a shame that Apple obviously doesn’t need any corporate customers (at least for a small company, I see no way in).

I just hope the open and usable Mac computer remains, as I would not know what to go back to. Windows? Never. Linux? Sure. But on what hardware?

JSONP. Compromised in 3…2…1…

To embed a vimeo video on some page, I had a look at their different methods for embedding and the easiest one seemed to be what is basically JSONP – a workaround for the usual restriction of disallowing AJAX over domain boundaries.

But did you know that JSONP not only works around the cross-domain restriction, it basically is one huge cross-site scripting exploit and there’s nothing you can do about it?

You might have heard this and you might have found articles like this one, thinking that using such libraries would make you safe. But that’s an incorrect assumption. The solution provided in the article has it backwards: it only helps to protect the originating site against itself, but it does not help at all to protect the calling site from the remote site.

You see, the idea behind JSONP is that you source the remote script using <script src="http://remote-service.example.com/script.js"> and the remote script then (after being loaded into your page and thus being part of your page) is supposed to call some callback of the original site (from the browser’s standpoint it is part of the original site).

The problem is that you get no control over the loading, let alone the content, of that remote script. Because the cross-domain restrictions prevent you from making an AJAX request to a remote server, you fall back on the one native HTML method for cross-domain requests (which arguably should never have been allowed in the first place): you load the remote script into your page and execute it in the context of your page. And at that moment you relinquish all control over your site.
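Boiled down to code, a JSONP “request” looks roughly like this (the URL, callback name and element id are made up); the crucial point is that the response is executable code, not data:

    // the callback the remote server is supposed to call
    window.handleVideo = function (data) {
        $('#title').text(data.title);
    };

    // the "request" is nothing but a script tag pointing at the remote domain
    var s = document.createElement('script');
    s.src = 'http://remote-service.example.com/video?callback=handleVideo';
    document.body.appendChild(s);

    // a well-behaved server answers with: handleVideo({"title": "..."});
    // a malicious or compromised one can answer with any script it likes,
    // and it will run with full access to your page, its DOM and its cookies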

Because you never see that script until it is loaded, you cannot control what it can do.

Using JSONP is basically subjecting yourself to an XSS attack by giving the remote end complete control over your page.

And I’m not just talking about malicious remote sites… what if the remote site itself is vulnerable to some kind of attack? What if it was the target of a successful attack? You can’t know, and once you do know, it’s too late.

This is why I would recommend you never rely on JSONP and instead find other solutions for remote scripting: use a local proxy that does sanitization (i.e. strict JSON parsing, which will keep you safe), or rely on the cross-domain messaging that was added in later revisions of the upcoming HTML5 standard.
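A sketch of the proxy variant (the proxy path is made up): because the request now stays on your own domain, you can treat the response strictly as data, and JSON.parse will throw on anything that isn’t pure JSON instead of executing it:

    // same-origin request – no script tag trickery needed
    $.get('/proxy/vimeo/video', function (text) {
        var data = JSON.parse(text); // throws on anything but pure JSON
        $('#title').text(data.title);
    }, 'text'); // force jQuery to hand us the raw text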

Sense of direction vs. field of view

Last Saturday, I bought the Metroid Prime Trilogy for the Wii. I didn’t yet have the Wii Metroid and it’s impossible for me to use the GameCube to play the old games, as the distance between my couch and the receiver is too large for the GameCube’s wired joypads. It has been a long while since I last played any of the 3D Metroids, and seeing the box in a store made me want to play them again.

So all in all, this felt like a good deal to me: Getting the third Prime plus the possibility to easily play the older two for the same price that they once asked for the third one alone.

Now I’m in the middle of the first game and I made a really interesting observation: my usually very good sense of direction seems to require a minimum field of view to get going. While playing on the GameCube, I was constantly busy looking at the map and felt unable to recognize even the simplest landmarks.

I spent the game in a constant state of feeling lost, not knowing where to go and forgetting how to get back to places where I had seen then-unreachable powerups.

Now it might just be that I remember the world from my first playthrough, but this time playing feels completely different to me: I constantly know where I am and where to go. Even with rooms that are very similar to each other, I always know where I am and how to get from point A to point B.

When I want to re-visit a place, I just go there. No looking at the map. No backtracking.

This is how I usually navigate the real world, so after so many years of feeling lost in 3D games, I’m finally able to find my way in them as well.

Of course I’m asking myself what has changed, and in the end it’s either the generally larger screen size and wide-screen format of the Wii port, or the controls via the Wiimote, which feel much more natural. The next step for me will be to find out which it is by connecting the Wii to a smaller (but still wide) screen.

But aside from all that, Metroid just got even better – not that I believed that to be possible.

Programming language names

Today in the office, a discussion about the merits of Ruby compared to Python and vice versa (isn’t it fun to have people around who are actually willing to discuss such issues?) led to us making fun of different programming languages by interjecting some sore points about them into their names.

The Skype conversation went roughly as follows (I removed some stuff for brevity but all the language names are intact):

thepilif: ja-long variable names and no function pointers-va really sucks
thepilif: though there’s always C(*^~**<<)++
thepilif: and then there’s always Del-Access violation at address 02E41C10. Read of address 02E41C10-phi
thepilif: or P-false==true-HP
Coworker: ok so for the sake of it i should add py thon
thepilif: or java-everything is global-script
thepilif: too bad it doesn’t work for C
thepilif: C-sigsegv
thepilif: they know why they just chose one letter
Coworker: exactly, k&r are smart
Coworker: has-how the fuck do i do a print-skell?
Coworker: pe/(^$^)/rl
thepilif: or pe-module? object? hash? what’s the difference-rl
Coworker: so we could say pe/$^/rl
thepilif: and ru-lets rewrite our syntax on the fly-by
Coworker: l(i(s(p)))
thepilif: can’t you wrap this into another pair of ()?
thepilif: (l(i(s(p))))))
Coworker: yes even better
thepilif: and add the syntax error
thepilif: one too many )
Coworker: it’s impossible to match them just by looking
thepilif: totally impossible. yes
Coworker: the human brain is no fucking pushdown automata
Coworker: but maybe the lisp people are
Coworker: vb! vb needs one
thepilif: visual-on error resume next-basic
thepilif: and of course brain-<<<<<******<<<>>>>-fuck
thepilif: c-tries to be dynamic, but var just doesn’t cut it-#
thepilif: c-not quite java nor c(++)?-#
thepilif: though the first one feels better
thepilif: oh.. and of course HT-unknown error-ML
thepilif: as a tribute to IE6
thepilif: and of course la-no bugs but still not usable-tex
thepilif: sorry, Knuth
thepilif: and send-$*$_**^$$$-mail

So the question is: Do you have anything to add? Do you feel that we were overly unfair?

Twisted Tornado

Lately, the net is all busy talking about the new web server released by FriendFeed last week and how their server basically does the same thing as the Twisted framework that has been around so much longer. One blog entry ends with

Why Facebook/Friendfeed decided to create a new web server is completely beyond us.

Well. Let me add my two cents. Not from a Python perspective (I’m quite the Python newbie, only having completed one bigger project so far), but from a software development perspective. I feel qualified to add the cents because I’ve been there and done that.

When you start any project, you will be on the lookout for a framework or solution to base your work on. Often, you already have some kind of idea of how you want to proceed and what the different requirements of your solution will be.

Of course, you’ll be comparing your requirements against the solutions around, but chances are that none of the existing solutions will match your requirements exactly, so you will be faced with changing them to match.

This involves not only the changes themselves but also other considerations:

  • is it even possible to change an existing solution to match your needs?
  • if the existing solution is an open source project, is there a chance of your changes being accepted upstream (this is not a given, by the way)?
  • if not, are you willing to back- and forward-port your changes as new upstream versions get released? Or are you willing to stick with the current version for eternity, manually back-porting security fixes?

and most importantly

  • what takes more time: writing a tailor-made solution from scratch, or learning how the closest-matching solution ticks in order to make it do what you want?

There is a very strong perception that too many features mean bloat and that a simpler solution always trumps a complex one.

Have a look at articles like «Clojure 1, PHP 0», which compares a home-grown, tailor-made solution in one language to a complete framework in another and seems to favor the tailor-made solution because it was more performant and felt much easier to maintain.

The truth is, you can’t have it both ways:

Either you are willing to live with «bloat» and customize an existing solution, adding some features and not using others, or you are unwilling to accept any bloat and you will do a tailor-made solution that may be lacking in features, may reimplement other features of existing solutions, but will contain exactly the features you want. Thus it will not be «bloated».

FriendFeed decided to go the tailor-made route, but unlike the many other projects that go the tailor-made route every day (take Django’s reimplementations of existing Python technologies like templating and ORM as another example) and keep the result internal, they actually went public.

Not with the intention of bad-mouthing Twisted (though it kinda sounded that way due to a bad choice of words), but with the intention of telling us: «Hey – here’s the tailor-made implementation we used to solve our problem – maybe it, or parts of it, are useful to you, so go ahead and have a look».

Instead of complaining that reimplementation and a bit of NIH was going on, the community could embrace the offering and try to pick the interesting parts that fit their own implementation(s).

This kind of reinventing the wheel is a standard process that is going on all the time, both in the Free Software world and in the commercial software world. There’s no reason to be concerned or alarmed. Instead we should be thankful for the groups that actually manage to put their code out for us to see – in so many cases, we never get a chance to see it and thus lose a chance at making our solutions better.

SMS is dead

BeejiveIM is the first multiprotocol IM application for the iPhone that supports the new background notification features of firmware 3.0. Yesterday I went ahead and bought that application, curious to see how well it would work.

And just now my phone vibrated and on the display, there was an IM message a coworker sent me via Google Talk. The user experience was exactly the same as it would have been with an SMS – well – nearly the same – the phone made a different sound.

So the dream I had many moons ago (6 years – boy – how time flies) has finally come true, with one difference: whereas back then a MB cost CHF 7, data is now practically free, considering that I’m unable to actually use up my traffic quota – and even beyond the quota, it’s only CHF 0.10 per MB now.

So let’s keep that in mind and also consider that SMS pricing hasn’t changed in the last six years.

So while IM was 52 times cheaper than SMS back then, data itself has since become another 70 times cheaper (CHF 7 vs. CHF 0.10 per MB), so the price advantage now ranges from somewhere around 3500 times cheaper (52 × 70 ≈ 3600) to infinitely cheaper when the message fits into the otherwise unused quota.

SMS pricing needs to be looked at. This just cannot be.

PostgreSQL 8.4

Like clockwork, about one year after the release of PostgreSQL 8.3, the team behind the best database in the world did it again and released PostgreSQL 8.4, the latest and greatest in a long series of awesomeness.

Congratulations to everyone involved, and may you have the strength to continue to improve your awesome piece of work.

For me, the highlights of this new release are:

  • parallel restore: I just tried this out and restoring a dump that usually took around 40 minutes (in standard sql/text format) now takes 5 minutes.
  • The improvements to psql usability just make it even clearer that psql isn’t just a command line database tool, but that it’s one of the best interfaces to access the data and administer the server. psql hands-down beats whatever database GUI tool I have seen so far.
  • TRUNCATE TABLE … RESTART IDENTITY is very useful during development (see the commands below)
  • no more max_fsm_pages makes maintaining the database even easier and removes one variable to keep track of.
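For reference, the parallel restore and the new TRUNCATE variant in command form (the database and table names are made up; note that parallel restore requires a custom-format dump, not a plain SQL/text one):

    # dump in custom format, then restore with 4 parallel jobs (-j is new in 8.4)
    pg_dump -Fc -f mydb.dump mydb
    pg_restore -j 4 -d mydb mydb.dump

And during development:

    -- reset the table's sequences along with its contents (new syntax in 8.4)
    TRUNCATE TABLE items RESTART IDENTITY;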

Thanks again for yet another awesome release.

iPhone works for me

A year ago I was comparing mobile phones; I bought a Touch Diamond and regretted it, then bought an iPhone 3G, which I used for a year, and now I have even upgraded to the 3GS. Since I just got yet another comment on my post about the Touch Diamond, I thought I should recycle the comparison table from a year ago – but this time I’ll compare my assumptions about the iPhone back then with how things actually turned out.

So, here’s the table:

Phone usage

  • Quick dialing of arbitrary numbers – actually, using the favorites list and even the touch keypad with its very large buttons, I never had a problem dialing a number.
  • Acceptable battery life (more than two days) – assumed: ?; actually: meh – two to three days, but as I’m syncing podcasts every day, I get to charge the phone every day as well, so this doesn’t matter as much.
  • Usable as modem – assumed: probably not; actually: it is now (using a little help for my Swisscom contract). As I was bound to my old contract with Sunrise until May, I would have been able to use my old phone in an emergency, but that thankfully didn’t happen.
  • Usable while not looking at the device – I got really dependent upon the small button on my headset plus the hardware volume buttons on the side of the device, both allowing me to do 90% of the stuff I was able to do on the old phone without looking at it.
  • Quick writing of SMS messages – actually, I’m nearly as fast as with T9: having all keys at my disposal eliminates the need to select the right word in the menus, but not having physical keys makes me wrestle with typos or auto-correction, which removes a bit of the advantage. It’s not nearly as bad as I had imagined though.
  • Sending and receiving of MMS messages – works now. I missed the feature about once or twice in the 2.0 days, but usually sending a picture via email worked just as well (and was cheaper).

PIM usage

  • Synchronizes with Google calendar/contacts – assumed: maybe; actually: yes. Since the beginning of the year this has worked really well, because Google just pretends to be Exchange.
  • Synchronizes with Outlook – assumed: maybe; actually: yes, directly via ActiveSync – but since February our company went the Google Apps route, so this has become irrelevant.
  • Usable calendar – assumed: yes; actually: yes.
  • Usable todo list

Media player usage

  • Integrates into my current iTunes-based podcast workflow – assumed: yes; actually: yes.
  • Straightforward audio playing interface – assumed: yes; actually: yes (see my note about the button on the headset above).
  • Straightforward video playing interface – actually, the interface is perfectly fine.
  • Acceptable video player – assumed: limited; actually: kinda, yes. Using my 8-core Mac Pro, it’s quick and easy to convert a video, but lately I’m using my home cinema equipment for the real movies/TV series and the iPhone for video podcasts, which already come in the native format. Still, it’s no generic video player capable of playing video in the most common formats, and it doesn’t really support playing from any server in my home network.

Hackability

  • SSH client – assumed: maybe; actually: yes. TouchTerm works very well – much better than any of the mobile PuTTY variants (Symbian and WinMob).
  • Skype client – assumed: maybe; actually: not quite. Usable with the speakerphone or headset, but not as useful in general due to the inability to run in the background.
  • OperaMini (browser usable on GSM) – not needed any more due to UMTS and near-flat rates.
  • WLAN browser – assumed: yes; actually: yes.

Nearly all my gripes about the iPhone have either become irrelevant or turned out not to be a problem after all.

Combine its very acceptable performance as a phone with its perfect performance as a podcast player and music player, its acceptability as a gaming platform and its perfection as a mobile internet device, and it becomes clear that the iPhone has become the perfect phone for me.

I upgraded to the 3GS mainly because of the larger capacity, but now that I have it, the speed improvement actually matters much more than the capacity increase, as 32 GB is still not enough to fit all my audio books: I’m limited to all my music, all unlistened podcasts and a selection of audio books.

But the speed improvement from the 3G to the 3GS is so incredible that I’m still very happy I made the purchase. All the other features are either not quite ready for prime time (voice control) or not really interesting to me (video recording, compass).

Still. After looking for the perfect phone for 8 years now, I finally found the hardware in the iPhone.