tempalias.com – development diary

After listening to this week’s Security Now! podcast, where they were discussing disposeamail.com, I was reminded of a little idea I had back in 2002: self-destructing email addresses.

Instead of providing a web interface for a catch-all alias, my solution was based on the idea of encoding time-based validity information and even a usage counter into an email address, and then checking that information on reception of the email to decide whether to alias the source address to a target address or to decline delivery with a “User unknown” error.

This would allow you to create temporary email aliases which redirect to your real inbox for a limited time or number of emails, but instead of forcing you to visit some third-party web interface, you would get the email right where all your other messages end up: in your personal inbox.
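Just to illustrate the old idea: a self-contained alias could look roughly like the sketch below. The address format and function names are invented for this post, not what the 2002 script did.

```javascript
// Hypothetical sketch: pack an expiry date and a maximum usage count into
// the local part, e.g. "bob.20100430.5@alias.example.com", and let the
// receiving mail server check both before forwarding.
function makeAlias(user, expires, maxUses) {
  var y = expires.getFullYear(),
      m = ('0' + (expires.getMonth() + 1)).slice(-2),
      d = ('0' + expires.getDate()).slice(-2);
  return user + '.' + y + m + d + '.' + maxUses + '@alias.example.com';
}

// The receiving side parses the address and compares it against "today" and
// the number of messages it has already forwarded for this alias.
function isStillValid(alias, usesSoFar) {
  var parts = alias.split('@')[0].split('.'),   // [user, yyyymmdd, maxUses]
      stamp = parts[1],
      expires = new Date(+stamp.slice(0, 4), +stamp.slice(4, 6) - 1, +stamp.slice(6, 8)),
      maxUses = parseInt(parts[2], 10);
  return new Date() <= expires && usesSoFar < maxUses; // day-granularity sketch
}
```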

Of course, this old solution had one big problem: it required a mail server on the receiving end, and it required you as a prospective user to hook the script into that mail server (also, I never managed to do just that with exim before losing interest, though by now I would probably know how to do it).

Now. Here comes the web 2.0 variant of the same thing.

tempalias.com (yeah. it was still available. so was .net) will provide a web service that allows you to create a temporary mail address that redirects to your real address. This temporary alias will be valid only for a certain date range and/or a certain number of emails sent to it. You will be able to freely choose the date range and/or the invocation count.

In contrast to the other services out there, the alias will redirect to your standard inbox. No ad-filled web interface. No security problems caused by typos and no account registration.
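To make the shape of the thing a bit more concrete, here is roughly what I imagine the alias-creation call to look like. The endpoint and field names are assumptions at this point, not a finished API.

```javascript
// POST /aliases  (Content-Type: application/json) – hypothetical payload
var request = {
  target: 'me@example.com', // where matching mail gets forwarded to
  days: 7,                  // alias stops working after this many days, and/or…
  maxUsage: 5               // …after this many delivered messages
};

// Possible "201 Created" response
var response = {
  id: 'x7f3q2',                  // generated alias ID
  alias: 'x7f3q2@tempalias.com', // the address you actually hand out
  expires: '2010-04-21T12:00:00Z',
  maxUsage: 5
};
```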

Also, the service will be completely open source, so you will be able to run your own.

My motivation is to learn something new, which is why I am

  • writing this thing in Node.js (also because a simple REST-based webapp and a simple SMTP proxy are just what node.js was invented for)
  • documenting my progress of implementation here (which also hopefully keeps me motivated).

My progress in implementing the service will always be visible to the public on the project’s GitHub page:

http://github.com/pilif/tempalias

As you can see, there’s already stuff there. Here’s what I’ve learned about today and what I’ve done today:

  • I learned how to use git submodules
  • I learned a bunch about node.js – how to install it, how it works, how module separation works and how to export stuff from modules.
  • I learned about the Express micro framework (which does exactly what I need here)
    • I learned how request routing works
    • I learned how to configure the framework for my needs (and how that’s done internally)
    • I learned how to play with HTTP status codes and how to access information about the request

Considering the huge amount of stuff I had plainly no clue about, what I’ve accomplished code-wise is quite little:

  • I added the web server code that will run the webapp
  • I created a handler that handles a POST request to /aliases
  • Said handler checks the content type of the request (see the sketch after this list)
  • I added a very rudimentary model class for the aliases (and learned how to include and use that)
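For the curious, a handler along those lines might look roughly like this. This is only a sketch in current Express idiom, not the actual code in the repository.

```javascript
var express = require('express');
var app = express();

app.use(express.json()); // parse JSON request bodies

// POST /aliases: refuse anything that isn't JSON, otherwise create the alias
app.post('/aliases', function (req, res) {
  if (!req.is('application/json')) {
    return res.status(415).send('Expecting application/json');
  }
  // ...validate req.body and store the new alias here...
  res.status(201).json({ id: 'x7f3q2' }); // hypothetical response
});

app.listen(3000);
```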

I still don’t know how I will store the alias information. In a sense, it’s a really simple data model mapping an alias ID to its information, so it’s predestined for the cool key/value stores out there. On the other hand, I want the application to be simple and I don’t feel like adding a key/value store as a huge dependency just for keeping track of 3 values per alias.
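Whatever the backend ends up being, the data itself is trivially small – something like this per alias (field names are assumptions, and the lookup is only a sketch):

```javascript
// A minimal sketch of the per-alias record and the lookup the SMTP side
// would do on delivery. The actual storage backend is still undecided.
var aliases = {
  x7f3q2: {
    target: 'me@example.com', // where to forward
    expires: 1271851200000,   // valid-until timestamp (ms), or null
    remaining: 5              // deliveries left, or null for unlimited
  }
};

function resolveAlias(id) {
  var a = aliases[id];
  if (!a) return null;                                         // unknown alias
  if (a.expires !== null && Date.now() > a.expires) return null;
  if (a.remaining !== null && a.remaining-- <= 0) return null;
  return a.target;
}
```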

Before writing more code, I’ll have to find out how to proceed.

So the next update will probably be about that decision.

No. It’s not «just» strings

On Hacker News, I came across this rant about strings in Ruby 1.9 where a developer was complaining about the new string handling in Ruby. Now, I’m not a Ruby developer by a long shot, but I am really interested in strings and string encoding, which is why I posted the following comment, which I reprint here as it’s too big to just be a comment:

Rants about strings and character sets that contain words of the following spirit are usually neither correct nor worthy of any further thought:

It’s a +String+ for crying out loud! What other language requires you to understand this
level of complexity just to work with strings?!

Clearly the author lives in his ivory tower of English-language environments where he is able to use the word “just” right next to “strings”, and he can probably also say that he “switched to UTF-8” without actually having done so, because the parts of UTF-8 he uses work exactly the same as the ASCII he used before.

But the rest of the world works differently.

Data can appear in all kinds of encodings and may need to be delivered in yet other encodings. Some of those can be converted into each other, others can’t.

Some Japanese encodings (Ruby’s creator is Japanese), for example, can’t be converted to a Unicode representation.

Nowadays, as a programming language, you have three options for handling strings:

1) Pretend they are bytes

This is what older languages have done and what Ruby 1.8 does. This of course means that your application has to keep track of encodings: for every string you keep in your application, you also need to keep track of what it is encoded in. When concatenating a string in encoding A to another string you already have in encoding B, you must do the conversion manually.

Additionally, because strings are bytes and the programming language doesn’t care about encoding, you basically can’t use any of the built-in string handling routines, because they assume that each byte represents one character.
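The rant is about Ruby, but the pitfall is easy to show in JavaScript terms using Node’s byte buffers – a sketch:

```javascript
// Treating text as bytes: the "length" of a non-ASCII word is suddenly
// the byte count, and naive slicing cuts multi-byte characters in half.
var word = 'Zürich';

console.log(word.length);                       // 6 characters
console.log(Buffer.from(word, 'utf8').length);  // 7 bytes – the 'ü' takes two

// A byte-oriented substring happily splits the 'ü' and yields garbage:
console.log(Buffer.from(word, 'utf8').slice(0, 2).toString('utf8')); // 'Z' plus a replacement character
```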

Of course, if you are one of those lucky English UTF-8 users, getting data in ASCII and English text in UTF-8, you can easily “switch” your application to UTF-8 while still pretending strings are bytes because, well, they are. For all intents and purposes, your UTF-8 is just ASCII called UTF-8.

This is what the author of the linked post wanted.

2) Use an internal Unicode representation

This is what Python 3 does, and it is what I feel to be a very elegant solution if it works for you: a string is just a sequence of Unicode code points. Strings don’t worry about encoding. String operations don’t worry about it. Only I/O worries about encoding. Whenever you get data from the outside, you need to know what encoding it is in and then decode it to convert it to a string. Conversely, whenever you want to actually output one of these strings, you need to know in what encoding you need the data and encode that sequence of Unicode code points accordingly.

You will never be able to convert a bunch of bytes into a string or vice versa without going through some explicit encoding/decoding.
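Sticking with JavaScript for illustration, Node’s Buffer shows the same “decode at the edges” discipline that Python 3 enforces natively – a sketch:

```javascript
// Bytes come in: you must say what encoding they are in to get a string.
var incoming = Buffer.from([0x5a, 0xfc, 0x72, 0x69, 0x63, 0x68]); // "Zürich" in Latin-1
var text = incoming.toString('latin1');   // decode: bytes + encoding -> string

// Bytes go out: you must pick an encoding again to get bytes back.
var outgoing = Buffer.from(text, 'utf8'); // encode: string + encoding -> bytes

console.log(text);            // 'Zürich'
console.log(outgoing.length); // 7 – the 'ü' is now two bytes
```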

This of course has some overhead associated with it, as you always have to do the encoding, and because operations on that internal sequence of Unicode code points might be slower than the simple array-of-bytes approach, especially if you are using some kind of variable-length internal encoding (which you probably are, to save memory).

Interestingly, whenever you receive data in an encoding that cannot be represented with Unicode code points, and whenever you need to send out data in that encoding, you are screwed.

This is a deficiency in the Unicode standard. Unicode was specifically made so that it could be used to represent every encoding, but it turns out that it can’t correctly represent some Japanese encodings.

3) Store an encoding with each string and expose both the string’s contents and the encoding to your users

This is what Ruby 1.9 does. It combines methods 1 and 2: it allows you to choose whatever internal encoding you need, it allows you to convert from one encoding to another, and it removes the need to externally keep track of every string’s encoding because it does that for you. It also makes sure that you don’t intermix encodings, but I’m getting ahead of myself.

You can still use the language’s string library functions because they are aware of the encoding and usually do the right thing (minus, of course, bugs).

As this method is independent of the (broken?) Unicode standard, you would never get into the situation where just reading data in some encoding makes you unable to write the same data back in the same encoding: in this case, you would just create a string tagged with the problematic encoding and do your work on that.

Nothing prevents the author of the linked post from using Ruby 1.9’s facilities to do exactly what Python 3 does (again ignoring the Unicode issue) by internally keeping all strings in, say, UTF-16 (you can’t keep strings in “Unicode” – Unicode is not an encoding – but that’s for another post). You would transcode all incoming and outgoing data to and from that encoding, and you would do all string operations on that application-internal representation.

A language throwing an exception when you concatenate a Latin-1 string to a UTF-8 string is a good thing! You see: once that concatenation has happened by accident, it’s really hard to detect and fix.

At least it’s fixable, though, because not every Latin-1 string is also a valid UTF-8 string. But if it so happens that you concatenate, say, Latin-1 and Latin-8 by accident, then you are really screwed and there’s no way to find out where Latin-1 ends and Latin-8 begins, as every valid Latin-1 string is also a valid Latin-8 string. Both are arrays of bytes with values between 0 and 255 (minus some holes).

In today’s small world, you want that exception to be thrown.
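The core of method 3 fits in a few lines. Here is a toy sketch in JavaScript with made-up names; Ruby 1.9 does something conceptually similar, with real transcoding on top.

```javascript
// A string value that knows its encoding and refuses to be concatenated
// with a differently-encoded one – the early failure you actually want.
function TaggedString(bytes, encoding) {
  this.bytes = bytes;       // a Buffer
  this.encoding = encoding; // e.g. 'utf8', 'latin1'
}

TaggedString.prototype.concat = function (other) {
  if (other.encoding !== this.encoding) {
    throw new Error('incompatible encodings: ' + this.encoding + ' vs ' + other.encoding);
  }
  return new TaggedString(Buffer.concat([this.bytes, other.bytes]), this.encoding);
};

var a = new TaggedString(Buffer.from('Zürich', 'utf8'), 'utf8');
var b = new TaggedString(Buffer.from('Zürich', 'latin1'), 'latin1');

a.concat(b); // throws instead of silently producing mixed-encoding garbage
```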

In conclusion, what I find really amazing about this complicated problem of character encoding is the fact that nobody feels it’s complicated, because it usually just works – especially method 1 described above, which has been used constantly in years past and is also very convenient to work with.

Also, it still works.

Until your application leaves your country and gets used in countries where people don’t speak ASCII (or Latin1). Then all these interesting problems arise.

Until then, you are annoyed by every method I described except method 1.

Then you will understand what a great service Python 3 has done for you, and you’ll switch to Python 3, which has very clear rules and seems to work for you.

And then you’ll have to deal with the Japanese encoding problem, and you’ll have to use binary bytes all over the place and stop using strings altogether because just reading input data destroys it.

And then you might finally see the light and begin to care for the seemingly complicated method 3.

Sticking to the iPhone

Recently, I got the chance to play around with a Nexus One and started using it as my main phone, with the intent of keeping it that way. I’d had enough of the lack of background apps and the closedness of the iPhone, so I thought I should really go through with this.

Unfortunately though, this didn’t work out so well.

People who haven’t tried both devices will probably never understand this, but the Nexus One touch screen is really, really bad. The bit of squiggliness you see on the picture in the linked article seems like no big deal, but after one week of Nexus One and then going back to the iPhone, you can’t imagine how smooth the iPhone feels to use again.

It’s like being in a very noisy environment and then stepping back into a quiet one.

Why did I try the iPhone again?

While I got podcast listening to work correctly on the Android phone, I noticed that a lot of my commuting time is not just spent listening to podcasts: some games (currently Doodle Jump and Plants vs. Zombies) play a huge role too, and the supply of games on the Android platform is really, really bad.

And don’t get me started on the keyboard: neither the built-in one nor the one I switched to even comes close to what the iPhone provides. I’m about five times as fast on the iPhone as on the Android. Worse: after switching to the Nexus One, I again began dreading having to write SMSes, which usually spells death to any phone for me.

Speaking of the keyboard: the built-in one is completely unusable for multilingual people. The text I write on a phone is about 50% English and 50% German. The Android keyboard doesn’t allow switching the language on the fly (while the English and German keyboards are quite alike, the keyboard language also determines the auto-correction language), and it couples the keyboard language to the phone UI language.

This is really bad, as over the years I became so accustomed to English UIs that I frankly cannot work with German UIs any more – also because of the usually really bad translations. Eek.

So, let’s tally.

iPhone

Advantages
  • Working touch screen
  • Smoother graphics and thus more fluent usage.
  • Never crashes
  • Apps I learned to depend on are available (Wemlin, Doodle Jump […])
  • No background noise in the headphones

Disadvantages
  • Can’t replace internal apps by better ones
  • Needs iTunes to download podcasts
  • No background apps
  • No buzzing of pictures (at least not if you want a location attached to your buzz)

Android Device

Advantages
  • Background-Applications (I wanted this for working IM as the notification based solutions on the iPhone never seemed to work)
  • Built-in applications can be replaced at will
  • Ability to buzz pictures (yeah. I know. Who needs this?)
  • On-the-fly podcast download.

Disadvantages
  • Really bad touch screen (jumpy, inaccurate, sometimes losing calibration until I reboot it)
  • Very mediocre applications available
  • UI sometimes slow
  • Very bad battery life (doesn’t make it through one day even when not heavily used)
  • Crashes about once a day
  • Did I already write “really bad touch screen” – I guess I did, but: “really bad touch screen”
  • Sometimes really bad, sometimes just bad background noise in the headphones. According to HTC, this can be fixed by periodically turning off the phone and removing the battery(!).
  • No audible support (I know I could probably remove the DRM, but why bother at the moment?)

While I thought I could live with the touch screen, the moment I turned on the iPhone again to play a round of “Plants vs. Zombies”, which had just come out for the i-devices, I saw how a touch screen is supposed to work, and I could not bring myself to go back. I still wanted the one big iPhone disadvantage – the lack of non-SMS-based messaging – fixed for me, so here’s what I’ve done:

  • WhatsApp on the iPhone works really well as an SMS replacement (something I was after for a very long time)
  • meebo has so far never disconnected me on the iPhone, which is something every other iPhone IM client has done to me – and even on Android, meebo tended to disconnect and not reconnect.

For me, that’s it. No more experiments. Whatever I tried in order to get away from Apple’s dictate always failed. The N900 is a geek’s heaven but doesn’t support my expensive in-ear iPhone headset and doesn’t provide any halfway interesting games. Android has a bad touch screen and next to no battery life, and it is slow and crashy.

It’s really hard for me to admit this as a geek and a strong believer in the freedom to use something I bought for whatever purpose I want, but Apple, even after two years, still rules the phone market in usability and hardware build quality.

Can’t wait to see what the next iteration of the iPhone will be, though they don’t have to change anything as long as their competition still thinks it’s ok to save $2 on each phone by using a crappy touchscreen and a crappy battery.

Sprite avatars in Gravatar

After the release of Google Buzz, the Google profile which I’d had for years finally became somewhat useful. Seeing that I really liked the avatar I’d added to that profile, I decided that Frog should henceforth be my official avatar.

This also meant that I wanted to add Frog to my Gravatar profile, which unfortunately proved to be… let’s say, interesting.

The image resizer Gravatar provides on their site to fit the uploaded image to the site’s needs apparently was not designed for sprites, as it tries to blow sprites up way out of proportion only to resize them back down. At first I thought I could get away with cheating by uploading the above picture with a huge margin added to it, but that only led to a JavaScript error in their uploader.

In the end, this is what I have done:

  1. Convert the picture into the TGA format
  2. Scale it using hq3x (explanation of hq3x)
  3. Convert it back to png and re-add transparency (hq3x had trouble with transparency in the TGA file)
  4. Scale it to 128 pixels in height
  5. Paste it into a pre-prepared 128×128 canvas
  6. Upload that.

This is how my gravatar looks now, which feels quite acceptable to me:

My Gravatar

The one on my Google profile was way easier to create: paste the original image into a 64×64 canvas and let Google do the resizing. It’s not as perfect as the hq3x algorithm, but that one suffers from the downscaling needed to make Frog fit 128 pixels in height anyway.

The other option would be to scale using hq2x and then paste that into a 128×128 canvas, yielding this sharper but smaller image:

But whatever I do, Frog will still be resized by Gravatar (and thus destroyed), so I went with the image that contains more colored pixels, at the expense of a bit of sharpness.

Google Buzz, Android and Google Apps Accounts

I was looking at the Google Maps application for Android, which now provides integrated Google Buzz support, showing buzzes directly on the map and allowing you to buzz (around where I live and work, there has been a tremendous uptake of Google Buzz, which makes this really compelling).

However, there’s a little peculiarity about the Android Maps application: if the main Google account you configured on the phone (that’s the first one you configure) is a Google Apps account, Maps will use that one for Buzz support (apparently, there’s already some kind of infrastructure for inter-company buzzing in place). This means that you would only see buzzes from other people in your domain and, because there’s no official support for this out there, only if they are also using an Android phone.

“Mittelpraktisch” (“moderately practical”), as I would say in German.

The obvious workaround is to configure your private Gmail account as your primary account (this is only possible by factory-resetting your device, by the way), but this has some disadvantages, mainly the fact that the calendar on Android phones only supports syncing with the primary account – and as it happens, it’s usually the work calendar (the Apps one) you want synchronized, not the private one (which lingers unused in my case).

To work around this issue, share your work calendar with your private Google account.

Unfortunately, I couldn’t do that as I’m posting this, because the default in the domain configuration is to not allow it. Thankfully, I’m that domain’s administrator, so I could change the setting (small company, remember), but it seems to take a while to propagate into the calendar account.

I’ll post more as my investigation turns out more, though it is my gut feeling that this mess will solve itself as Google fixes their Maps application to not use that phantom corporate buzz account.

PHP 5.3 and friends on Karmic

I have been patient. For months I hoped that Ubuntu would sooner or later get PHP 5.3, a release I’m very much looking forward to, mainly because of the addition of anonymous inner functions, which spell the death of create_function or even eval.

We didn’t get 5.3 for Karmic, and who even knows about Lucid (it’s crazy that nearly one year after the release of 5.3, there is still debate about whether to include it in the next version of Ubuntu, which will be the current LTS release for the next four years – IMHO quite the disservice to PHP 5.3 adoption).

Anyway: we are in the process of releasing a huge update to PopScan that is heavily focused on getting rid of cruft, increasing speed all over the place and increasing overall code quality. The last part especially could benefit from having 5.3, and seeing that PopScan already runs well on 5.3 at this point, I really wanted to upgrade.

In comes Al-Ubuntu-be (Albe), a coworker of mine, with his awesome Debian packaging skills: while there are already a few PPAs out there that contain a 5.3 package, Albe went the extra step and packaged not only PHP 5.3 but also quite a few other packages we depend upon that might be useful to my readers: APC, memcache, imagick and xdebug for development.

While we can make no guarantees that these packages will be heavily maintained, they will get some security-update treatment (though most likely by version bumping as opposed to backporting).

So: if you are on Karmic (or later on Lucid, if it doesn’t get 5.3) and want to run PHP 5.3 with APC and memcache, head over to Albe’s PPA.

Also, I’d like to take the opportunity to thank Albe for his efforts: having a PPA with real .deb packages, as opposed to the self-compiled mess I would have produced, gives us a much nicer way of updating existing installations to 5.3 and an even nicer path back to the original packages once they come out. Thanks a lot.

Things I can’t do with an iPhone/iPad

  • have a VoIP call going on when a mobile call/SMS arrives
  • read Kindle ebooks (I can now, but knowing Apple’s stance on “competing functionality”, with the advent of iBook, how long do you think this will last?)
  • give it to our customers as another device to use with PopScan (it can’t be locked down and there’s no way to do centralized app deployment that doesn’t go through Apple)
  • plug in any peripheral that isn’t Apple-sanctioned
  • plug in a peripheral and use it system-wide
  • play a SNES ROM (or any other console rom)
  • install Adblock (which especially hurts on the iPad)
  • consistently use IM (background notifications don’t work consistently)

The iPhone provides me with many advantages, so I can live with its inherent restrictions (which are completely arbitrary – there’s no technical reason for them), but I see no point in buying yet another locked-down device that does half of the stuff I’d want it to do, and does it half-assed at that.

Also, it’s a shame that Apple obviously doesn’t need any corporate customers (at least as a small company, I see no way in).

I just hope the open and usable Mac computer remains. I would not know what to go back to. Windows? Never. Linux? Sure. But on what hardware?

How we use git

The following article was a comment I made on Hacker News, but as it’s quite big and as I want to keep my stuff in a central place, I’m hereby reposting it, adding a bit of formatting and shameless self-promotion (i.e. links):

My company is working on a – by now – quite large web application. Initially (2004), I began with CVS, then moved to SVN and, in the second half of last year, to git (after a one-year period of personal use of git-svn).

We deploy the application for our customers – sometimes to our own servers (both self-hosted and in the cloud) and sometimes to their machines.

Until the middle of last year, as a consequence of SVN’s really crappy handling of branches (it can branch, but it fails at merging), we did very incremental development, adding features on customer request and bug fixes as needed, oftentimes uploading specific fixes to different sites and committing them to trunk, but rarely ever updating existing installations to trunk, in order to keep them stable.

Huge mess.

With the switch to git, we also introduced real release management, doing one feature release every six months and keeping the released versions on strict maintenance (for all intents and purposes – the web application is highly customizable, and we do make exceptions in the customized parts so as to react to immediate feature wishes of clients).

What we are doing git-wise is the reverse of what the article shows: bug fixes are (usually) done on the release branches, while all feature development (except for these customizations) is done on the main branch (we just use the git default name “master”).

We branch off of master when another release date nears and then tag a specific revision of that branch as the “official” release.

There is a central gitosis repository which contains the “official” repository, but every one of us (4 people working on this – so we’re small compared to other projects, I guess) has their own gitorious clone which we use heavily for code sharing and code review (“hey – look at this feature I’ve done here: pull branch foobar from my gitorious repo to see…”).

With this strict policy of (for all intents and purposes) “fixes only” and especially “no schema changes”, we can even auto-update customer installations to the head of their respective release-branches which keeps their installations bug-free. This is a huge advantage over the mess we had before.

Now. As master develops and bug-fixes usually happen on the branch(es), how do we integrate them back into the mainline?

This is where the concept of the “Friday merge” comes in.

On Friday, my coworker or I usually merge all changes in the release branches upwards until they reach master. Because it’s only a week’s worth of code, conflicts rarely happen, and if they do, we still remember what the issue was.

If we do a commit on a branch that doesn’t make sense on master because master has sufficiently changed or a better fix for the problem is in master, then we mark these with [DONTMERGE] in the commit message and revert them as part of the merge commit.

On the other hand, in case we come across a bug during development on master and we see how it would affect production systems badly (like a security flaw – not that they happen often), and if we have already devised a simple fix that is safe to apply to the branch(es), we fix it on master and then cherry-pick it onto the branches.

This concept of course heavily depends upon clean patches, which is another feature git excels at: Using features like interactive rebase and interactive add, we can actually create commits that

  • Either do whitespace or functional changes. Never both.
  • Only touch the lines absolutely necessary for any specific feature or bug
  • Do one thing and one thing only.
  • Contain a very detailed commit message explaining exactly what the change encompasses.

This, in turn, allows me to create extremely clean (and exhaustive) change logs and NEWS file entries.

Now, some of these policies about commits were a bit painful to actually make everyone adhere to, but over time I was able to convince everybody of the huge advantage clean commits provide, even though it may take some time to get them into shape (also, you gain that time back once you have to do some blame-ing or other history digging).

Using branches with only bug fixes and auto-deploying them, we can increase the quality of customer installations, and using the concept of a “Friday merge”, we make sure all bug fixes end up in the development tree without each developer having to spend an awful lot of time merging manually and without ending up in merge hell where branches and master have diverged too much.

The addition of gitorious for easy exchange of half-baked features to make it easier to talk about code before it gets “official” helped to increase the code quality further.

git was a tremendous help with this and I would never in my life want to go back to the dark days.

I hope this additional insight might be helpful for somebody still thinking that SVN is probably enough.

linktrail – a failed startup – introduction

I guess it’s inevitable. Good ideas may fail. And good ideas may be years ahead of their time. And of course, sometimes, people just don’t listen.

But one never stops learning.

In the year 2000, I took part in a couple of guys’ plan to become the next Yahoo (Google wasn’t quite there yet back then), or, to use the words we used on the site:

For these reasons, we have designed an online environment that offers a truly new way for people to store, manage and share their favourite online resources and enables them to engage in long-lasting relationships of collaboration and trust with other users.

The idea behind the project, called linktrail, was basically what would much later be picked up by the likes of Twitter, Facebook (to some extent) and the various community-based news sites.

The whole thing went down the drain, but the good thing is that I was able to legally salvage the source code, install it on a personal server of mine and publish it. And now that so many years have passed, it’s probably time to tell the world about this, which is why I have decided to start this little series about the project. What is it? How was it made? And most importantly: why did it fail? And consequently: what could we have done better?

But let’s first start with the basics.

As I said, I was able to legally acquire the database and code (which was mostly written by me anyway) and to install the site on a server of mine, so let’s get that out of the way to start with. The site is available at linktrail.pilif.ch. What you see running there is the result of 6 months of programming by myself, after a concept done by the guys I worked with to create this.

What is linktrail?

If the tour we made back then is any good, then just taking it would probably be enough, but let me phrase it in my own words: the site is a collection of so-called trails, which in turn are small units, comparable to blogs, consisting of links, titles and descriptions. These micro-blogs are shown in a popup window (that’s what we had back then) beside the browser window to allow quick navigation between the different links in the trail.

Trails are made by users, either by each user on their own or as collaborative work between multiple users. The owner of a trail can hand out permissions to everybody or to their friends (using a system quite similar to what we currently see on Facebook, for example).

A trail is placed in a directory of trails, which was built around the directory structures we used back then, though by now we would probably do this very differently. Users can subscribe to trails they are interested in. In that case, they will be notified if a trail they are subscribed to is updated, either by the owner or by anybody else with the rights to update the trail.

Every user (called expert in the site’s terms) has their profile page (here’s mine) that lists the trails they created and the ones they are subscribed to.

The idea was for you as a user to find others with similar interests and form a community around those interests to collaborate on trails. An in-site messaging system helped users communicate with each other: aside from just sending plain text messages, it’s possible to recommend trails (for easy one-click subscription).

linktrail was my first real programming project, started basically 6 months after graduating from what the US would call high school. Combine that with the fact that it was created during the height of the browser wars (year 2000, remember), with web standards basically non-existent, and you can imagine what a mess is running behind the scenes.

Still, the site works fine within those constraints.

In future posts, I will talk about the history of the project, about the technology behind the site, about special features and, of course, about why this all failed and what I would do differently – both in matters of code and organization.

If I’ve piqued your interest, feel free to have a look at the code of the site, which I just now converted from CVS (I started using CVS about 4 months into development, so the first commit is HUGE) to SVN to git and put up on github for public consumption. It’s licensed under a BSD license, but I doubt that you’d find much of use in this mess of PHP3(!) code (though it runs unchanged(!) on PHP5 – topic of another post, I guess), HTML 3.2(!) tag soup and JavaScript hacks.

Oh, and if you can read German, I have also converted the CVS repository that contained the concept papers written over time.

In preparation of this series of blog-posts, I have already made some changes to the code base (available at github):

  • login after register now works
  • warning about unencrypted(!) passwords in the registration form
  • registering requires you to solve a reCAPTCHA.

JSONP. Compromised in 3…2…1…

To embed a vimeo video on some page, I had a look at their different methods for embedding and the easiest one seemed to be what is basically JSONP – a workaround for the usual restriction of disallowing AJAX over domain boundaries.

But did you know that JSONP not only works around the cross-domain restriction, it basically is one huge cross-site scripting exploit, and there’s nothing you can do about it?

You might have heard this, and you might have found articles like this one, thinking that using libraries like that would make you safe. But that’s an incorrect assumption. The solution provided in the article has it backwards: it only helps to protect the originating site against itself, but it does not help at all to protect the calling site from the remote site.

You see, the idea behind JSONP is that you source the remote script using <script src="http://remote-service.example.com/script.js"> and the remote script then (after being loaded into your page and thus being part of your page) is supposed to call some callback of the original site (from a browser’s standpoint, it is part of the original site).

The problem is that you get no control over the loading, let alone the content, of that remote script. Because the cross-domain restrictions prevent you from making an AJAX request to a remote server, you are using the native HTML method for cross-domain requests (which arguably should never have been allowed in the first place), and at that moment you relinquish all control over your site: the remotely loaded script runs in the context of your page – which is exactly how JSONP gets around the cross-domain restrictions.
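The whole mechanism fits in a handful of lines. Here is a sketch; the endpoint and callback name are made up.

```javascript
// What every JSONP helper boils down to: inject a <script> tag and hope
// the remote server plays nice.
function jsonp(url, callbackName, fn) {
  window[callbackName] = fn;                  // the remote script is expected to call this
  var s = document.createElement('script');
  s.src = url + '?callback=' + callbackName;  // ask the server to wrap its JSON
  document.getElementsByTagName('head')[0].appendChild(s);
  // From this point on, whatever the remote server sends back executes
  // with full access to your page.
}

jsonp('http://remote-service.example.com/videos/123', 'handleVideo', function (data) {
  console.log(data.title);
});

// The server is *supposed* to answer with
//   handleVideo({"title": "..."});
// but nothing stops it from answering with any script it likes.
```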

Because you never see that script until it is loaded, you cannot control what it can do.

Using JSONP is basically subjecting yourself to an XSS attack by giving the remote end complete control over your page.

And I’m not just talking about malicious remote sites… what if they themselves are vulnerable to some kind of attack? What if they have been the target of a successful attack? You can’t know, and once you do know, it’s too late.

This is why I recommend never relying on JSONP and finding other solutions for remote scripting: use a local proxy that does sanitization (i.e. strict JSON parsing, which will save you), or rely on the cross-domain messaging that was added in later revisions of the upcoming HTML5 standard.
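A minimal sketch of the proxy variant in Node.js follows (the host and path are made up): the browser only ever talks to your own domain, and JSON.parse – unlike a script tag – throws on anything that isn’t plain data.

```javascript
var http = require('http');

// Your own server fetches the remote data and only ever passes on what
// survives strict JSON parsing – remote code never reaches the browser.
http.createServer(function (req, res) {
  http.get('http://remote-service.example.com/videos/123', function (remote) {
    var body = '';
    remote.on('data', function (chunk) { body += chunk; });
    remote.on('end', function () {
      try {
        var data = JSON.parse(body); // data only; a script payload throws here
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(JSON.stringify(data));
      } catch (e) {
        res.writeHead(502);
        res.end('upstream did not return valid JSON');
      }
    });
  });
}).listen(8080);
```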