Technology driven life changes

Last year, when I talked about finally seeing the Apple Watch becoming mildly useful, I had no idea what kind of a ride I was going to be on.

Generally, I’m not really concerned about my health or fitness, but last September, when my wonderful girlfriend left for a year of study in England, I decided that I had finally had enough and wanted to lose weight.

Having a year of near-zero social obligations would totally allow me to adjust my life-style in a way that’s conducive to weight loss, so here’s what I started doing:

  • During weekdays, I greatly reduced my calorie intake to basically just a salad and a piece of bread every day (you can pry my bread from my cold dead hands – it’s the one food I think I like the most).
  • Every day, no matter the weather, no matter the workload, no matter what, I was going to walk home after my workday in the office; on weekends, I would take an equivalent walk instead.
  • Every day, I wanted to fill the “Activity” and the “Exercise” rings on my Apple Watch.

Now walking home sounds like nothing special, but I’m privileged to live in Zürich, Switzerland, which means that I have very easy access to forests to walk in.

So commuting home on foot meant that I could walk at least 8 kilometres (5 miles), climbing 330 m (1,082 feet), most of it through the forest.

Every day, no matter whether it was way too hot, way too cold, whether it was raining, hailing or snowing, I would walk home. And every day I would be using my Apple Watch to track what I would generously call a “Workout” (even though it was just walking – but if you go from zero sports to that, I guess it’s ok to call it that).

From September to December I gradually increased the distance I walked.

This is the other great thing about Zürich: Once you reach the forest (which you do by walking 20 minutes in practically any direction), you can stay in the forest for hours and hours.

First I extended the 8 km walk to 10 km, then 12, then 14 and finally 19 km (11.8 miles).

During that time, I kept tracking all the vital signs I could track between the Apple Watch and a Withings scale I bought 1-2 months into this.

My walks got faster and my resting heart rate got lower and lower, from 80 down to 60 now.

Every evening after the walk, I would look at the achievements handed out by my Watch. This is also why I’ve never updated my movement goals in the Activity app: getting all these badges, honestly, was a lot of fun and very motivational. Every evening I would get notified of extending my movement streak, of doubling or even tripling my movement goal, and of tripling or quadrupling my exercise goal.

screenshot of the activity app

Every morning I would weigh myself and bask in the glory of the ever falling graph painted by the (back then very good) iOS app that came with the scale. I would manage to lose a very consistent 2kg (4.4lbs) per week.

Every walk gave me the chance to experience some of nature’s beauty.

Crazy sunsets

sunset

Beautiful sunrises

sunrise

Enchanted forests

wintery forest

And frozen creeks

frozen creek

And in spring I could watch trees grow.

When I got home after up to three hours of walking, I was dead tired at around 10pm, meaning that for the first time in ages I would get more than enough sleep and still be able to get up between 6 and 7.

By mid-March, after six months of a very strict diet and walking home every day, I was done. I had lost 40kg (88.1 lbs).

Now the challenge shifted from losing weight to not gaining weight. I decided to make the diet less strict but also continue with my walks, though I would not do the regular 19km ones any more as they would just take too long (3 hours).

But by June, I really started to notice a change: I wouldn’t feel these walks at all any more. No sweat, no noticeable change in heart rate while on them, no tiredness. The walks really felt like a waste of time.

So I started running.

I never liked running. I was always bad at it: all the way through school, where I was the slowest and always felt really bad afterwards, and through my life until now, where I just never did it. Running felt bad and I hated it.

But now things were different.

The first time I changed from walking to running, I did so after reaching the peak altitude, so it was mostly straight and a little bit downhill. But still: I ran 4km (2.48 miles) and when I got home I didn’t feel much more tired.

I was very surprised, because running 4 km would have been completely unthinkable to me at any point in my life, but there I was. I had just done it.

So the next day, I decided to run most of the way, just skipping the steepest parts. Suddenly, there I was, running 8 km (5 miles), still not feeling particularly tired afterwards.

So I started tracking these runs (using both Runkeeper and Strava for technical reasons – but that’s another post), seeing improvement in my time all the way through July.

And then, on August 1st, I ran a half marathon, climbing 612 m (2,007 feet).

screenshot of the half-marathon workout

Considering that this was my first, the time isn’t even too bad, and what’s even more fun to me: I didn’t feel too tired afterwards and totally felt like I could have run even farther.

So I guess that after taking it very slowly and moving from walking a bit to walking more to walking a lot to running a bit to running some more, even I, the most unathletic person imaginable, can push myself into shape.

But the most interesting aspect in all of this is that without technology, the Apple Watch in particular, and without the cheesy achievements, none of this would ever have been possible. I hated sports and I’m honestly still not really interested. But the prospect of being awarded some stupid badges every day is what finally pushed me.

And now, in only a single month, my girlfriend will finally return to Switzerland, and I guess she’ll find me in better shape than she has ever seen me. I hope that the prospect of collecting some more badges from my watch will keep me going even when social obligations might tempt me into skipping a workout.

Another platform change

If you can read this, then it has happened – this blog moved again.

Eons ago in internet time, this project got started; that first incarnation lasted for 4 years, at which point I got spammed so badly that I had to move away.

So, still ages ago, I moved to Serendipity which fixed the spam issue for me.

That lasted only two years before I moved again, to WordPress this time – the nicer admin tool and the richer theme selection pushed me over the edge.

While I’m still happy with WordPress in general, over time I learned a few things:

  • While running your own server is fun, having it compromised is not. Using
    any well-known blogging engine that relies on server-side generation of
    content is ultimately a way to get compromised unless you constantly patch
    security issues, taking up a lot of time in the process.

  • While the old name of this blog (gnegg) was a cool pun for people who knew
    me, it didn’t at all convey my identity on the internet. Me? I’m
    pilif, so this should be conveyed at least by the URL.

  • Most of my posts were relying heavily on custom markup, making the WP
    WYSIWYG editor more annoying than useful.

So when @rmurphey blogged about Octopress, I immediately recognized the huge opportunity it provides:

I can host static files on a server I don’t own and thus don’t have to worry about it being compromised, I can blog using my favorite tools (any text editor and git), and I still get an acceptable layout.

So here we are – at the end of yet another conversion.

While the old URLs already 301 redirect, pictures are still missing and I’ll work on getting them back. The comments I’ll try to port over too, the moment I see how Disqus handles my WordPress export.

The gnegg branding is gone and has been replaced by something that looks like a name but isn’t, while still being a fun pun for people who know me. The tagline, of course, stays the same.

So.

Welcome to my new home and let’s hope this lasts as long as the previous instances!

AJAX, Architecture, Frameworks and Hacks

Today I was talking with @brainlock about JavaScript, AJAX and Frameworks and about two paradigms that are in use today:

The first is the “traditional” paradigm where your JS code is just glorified view code. This is how AJAX worked in the early days and how people are still using it. Your JS code intercepts a click somewhere, sends an AJAX request to the server and gets back either more JS code which just gets evaluated (thus giving the server a kind of indirect access to the client DOM) or an HTML fragment which gets inserted at the appropriate spot.

This means that your JS code will be ugly (especially the code coming from the server), but it has the advantage that all your view code is right there where all your controllers and your models are: on the server. You see this pattern in use on the 37signals pages or in the GitHub file browser, for example.

Keep the file browser in mind as I’m going to use that for an example later on.

The other paradigm is to go the other way around and promote JS to a first-class language. Now you build a framework on the client end and transmit only data (XML or JSON, but mostly JSON these days) from the server to the client. The server just provides a REST API for the data plus serves static HTML files. All the view logic lives only on the client side.

The advantages are that you can organize your client-side code much better, for example using Backbone, that there’s no expensive view rendering on the server side, and that you basically get your third-party API for free because the API is the only thing the server provides.

This paradigm is used for the new Twitter webpage or in my very own tempalias.com.

Now @brainlock is a heavy proponent of the second paradigm. After being enlightened by the great Crockford, we both love JS and we both worked on huge messes of client-side JS code which has grown over the years and lacks structure and feels like copy pasta sometimes. In our defense: Tons of that code was written in the pre-enlightened age (2004).

I, on the other hand, see some justification for the first pattern as well and I wouldn’t throw it away so quickly.

The main reason: It’s more pragmatic, it’s more DRY once you need graceful degradation and arguably, you can reach your goal a bit faster.

Let me explain by looking at the github file browser:

If you have a browser that supports the HTML5 history API, then a click on a directory will reload the file list via AJAX and at the same time the URL will be updated using pushState (so that the current view keeps an absolute URL which remains valid even if you open it in a new browser).

If a browser doesn’t support pushState, it will gracefully degrade by just using the traditional link (and reloading the full page).

Let’s map this functionality to the two paradigms.

First the hacky one:

  1. You render the full page with the file list using a server-side template
  2. You intercept clicks to the file list. If it’s a folder:
  3. you request the new file list
  4. the server now renders the file list partial (in Rails terms: basically just the file list part) without the rest of the site
  5. the client gets that HTML code and inserts it in place of the current file list
  6. You patch up the URL using pushState

Done. The view code lives only on the server. Whether the file list is requested via the AJAX call or a traditional full page load doesn’t matter: the code path is exactly the same. The only difference is that the rest of the page isn’t rendered in the case of an AJAX call. You get graceful degradation with no additional work.
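
To make this concrete, here is a minimal, hypothetical sketch of the client side of that flow. It assumes jQuery is loaded and uses made-up selectors; it is in no way GitHub’s actual code, just an illustration of the pattern:

// Paradigm 1: the server renders an HTML partial, the client just swaps it in.
$('.file-list').on('click', 'a.folder', function (e) {
  if (!window.history.pushState) return;    // no pushState: let the normal link reload the page
  e.preventDefault();
  var url = this.href;

  // same URL as the plain link; the server just skips the surrounding layout
  $.get(url, function (html) {
    $('.file-list').html(html);             // insert the rendered partial
    history.pushState({}, '', url);         // keep the view at a shareable URL
  });
});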

Now assuming you want to keep graceful degradation possible and you want to go the JS framework route:

  1. You render the full page with the file list using a server-side template
  2. You intercept the click to the folder in the file list
  3. You request the JSON representation of the target folder
  4. You use that JSON representation to fill a client-side template which is a copy of the server side partial
  5. You insert that HTML at the place where the file list is
  6. You patch up the URL using pushState

The number of steps is the same, but the amount of work isn’t: if you want graceful degradation, you write the file list template twice, once as a server-side template and once as a client-side template. Both are quite similar, but usually you’ll be forced to use slightly different syntax. If you update one, you have to update the other, or the experience will differ depending on whether you click a link or open the URL directly.
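
For comparison, a hypothetical sketch of the same click handler in the framework style. The JSON URL, the data shape and the template are all made up for illustration (and Underscore is assumed for the templating); note how the template duplicates what the server-side partial already renders:

// Paradigm 2: the server returns JSON, the client renders its own template.
var fileTemplate = _.template(
  '<ul><% _.each(files, function (f) { %>' +
  '<li><a href="<%= f.url %>"><%= f.name %></a></li>' +
  '<% }); %></ul>'
);

$('.file-list').on('click', 'a.folder', function (e) {
  e.preventDefault();
  var url = this.href;

  $.getJSON(url + '.json', function (data) {
    $('.file-list').html(fileTemplate({ files: data.files }));  // client-side rendering
    history.pushState({}, '', url);
  });
});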

Also you are duplicating the code which fills that template: On the server side, you use ActiveRecord or whatever other ORM. On the client side, you’d probably use Backbone to do the same thing but now your backend isn’t the database but the JSON response. Now, Backbone is really cool and a huge timesaver, but it’s still more work than not doing it at all.

OK. Then let’s skip graceful degradation and make this a JS-only client app (good luck trying to get away with that). Now the view code on the server goes away and you are just left with the model on the server to retrieve the data, with the model on the client (Backbone helps a lot here, but there’s still a substantial amount of code that needs to be written which otherwise wouldn’t be) and with the view code on the client.

Now don’t get me wrong.

I love the idea of promoting JS to a first class language. I love JS frameworks for big JS only applications. I love having a “free”, dogfooded-by-design REST API. I love building cool architectures.

I’m just saying that at this point it’s so much work to do it right that the old ways do have their advantages and that we should not condemn them for being hacky. True, they are. But they are also pragmatic.

Skype over 3G – calculations

With the availability of an iPhone Skype client that can run in the background, I wanted to find out how much it would cost me if I moved calls from the regular mobile network over to Skype over 3G.

Doing VoIP over 3G is a non-issue from a political standpoint here in Switzerland as there are no unlimited data plans available and the metered plans are expensive enough for the cellphone companies to actually be able to make money with, so there’s no blocking or anything else going on over here.

To see how much data accumulates, I held a phone conversation with my girlfriend that lasted exactly 2 minutes, during which we both talked. Before doing so, I reset the traffic counter on my phone, and immediately afterwards I checked it again.

After the two minutes, I had sent 652 KB of data and received 798 KB.

This works out to around 750 KB per minute (being conservative here and generously rounding up).

My subscription comes with 250 MB of data per month. Once that’s used, there’s a flat rate of CHF 5 per day I’m using any data, so I really should not go beyond the 250MB.

As I’m not watching video (or audio) over 3G, my data usage is otherwise quite low – around 50MB.

That leaves 200MB unused.
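
Put together, the back-of-the-envelope calculation looks like this (just a small sketch using the numbers from above):

// Rough Skype-over-3G budget based on the measured two minute call.
var sentKB = 652, receivedKB = 798, minutes = 2;

var kbPerMinute = (sentKB + receivedKB) / minutes;  // 725, rounded up to 750 to be safe
var budgetKBPerMinute = 750;

var planMB = 250, otherUsageMB = 50;
var freeKB = (planMB - otherUsageMB) * 1000;        // 200 MB left over for Skype

var talkMinutes = freeKB / budgetKBPerMinute;       // ≈ 267 minutes
console.log(talkMinutes / 60);                      // ≈ 4.4 hours per month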

With 750 KB/minute, this equals 4.4 hours of Skype conversation per month, which is something I would never ever reach. That means that, at least with Skype-enabled people, I can now talk for free.

Well. Could.

While the voice quality in Skype over 3G is simply astounding, the solution unfortunately still isn’t practical due to two issues:

  1. Skype sucks about 20% of battery per hour even when just idling in the background.
  2. Skype IM doesn’t know the concept of locations, so every IM is replicated to all clients. This means that whenever I type something to a coworker, my phone will make a sound and vibrate.

Issue 2 I could work around by quitting Skype when I’m in front of a PC, but issue 1 really is a killer. Maybe the new iPhone 4 (if I go that route instead of giving Android another try) with its bigger battery will help.

stabilizing tempalias

While the maintenance last weekend brought quite a bit of stabilization to the tempalias service, I quickly noticed that it was still dying sooner or later. Before updating node, it died because it could not allocate more memory; this time, it died by simply not answering any requests any more.

A look at the error log quickly revealed a great many exceptions complaining about a certain request type not being allowed to have a body, and finally one complaining about not being able to open a file because the process had run out of file handles.

So I quickly improved error logging and restarted the daemon in order to get a stacktrace leading to these tons of exceptions.

This quickly pointed to paperboy, which was sending the file even if the request was a HEAD request. http.js in node checks for this and throws whenever you send a body when you should not. That exception then led to paperboy never closing the file (have I already complained about how incredibly difficult it is to do proper exception handling the moment continuations get involved? I think not, and I also think it’s a good topic for another diary entry). With the help of lsof I quickly saw that my suspicion was true: the node process serving tempalias had tons of open handles to public/index.html.
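
The fix itself is conceptually tiny. This is not the actual paperboy patch, just a hedged sketch of the idea in plain node: send the headers, but only stream the body when the request isn’t a HEAD request.

// Sketch: answer HEAD requests with headers only, so no file stream is ever opened.
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  var path = 'public/index.html';
  fs.stat(path, function (err, stat) {
    if (err) { res.writeHead(404); res.end(); return; }
    res.writeHead(200, { 'Content-Type': 'text/html', 'Content-Length': stat.size });
    if (req.method === 'HEAD') { res.end(); return; }   // no body allowed here
    fs.createReadStream(path).pipe(res);                // pipe ends the response and closes the file
  });
}).listen(8080);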

I sent a patch for this behavior to @felixge which was quickly applied, so that’s fixed now. I hope it’s of some use for other people too.

Now knowing that having a look at lsof now and then might be a good idea quickly revealed another problem: while the leaking file handles were gone, I noticed tons and tons of SMTP sockets staying open in the CLOSE_WAIT state. Not good, as that too will lead to handle starvation sooner or later.

On a hunch, I found out that connecting to the SMTP daemon and then disconnecting without sending QUIT (i.e. not letting the server close the connection) was what was causing the lingering sockets. Clients disconnecting like that is very common when the server answers with a 5xx response, which is exactly what the tempalias daemon was designed to do.

So I had to fix that in my fork of the node SMTP daemon (the original upstream isn’t interested in daemon functionality and the owner of the fork I forked from doesn’t respond to my pull requests, hence I’m maintaining my own fork for now).
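
The gist of that fix, again only as a hedged sketch rather than the actual code of the fork: the daemon has to react to the client going away instead of waiting for a QUIT that never comes.

// Sketch: clean up sockets of clients that disconnect without sending QUIT.
var net = require('net');

net.createServer(function (socket) {
  socket.write('220 example ESMTP\r\n');

  socket.on('data', function (chunk) {
    // ... parse SMTP commands here, possibly answering with a 5xx ...
  });

  // without these handlers, a client that simply drops the connection leaves
  // the socket in CLOSE_WAIT until the process runs out of handles
  socket.on('end', function () { socket.end(); });
  socket.on('error', function () { socket.destroy(); });
}).listen(2525);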

Further looks at lsof proved that we are now quite stable in resource consumption: no lingering connections, no unclosed file handles.

But the error log was still filling up, this time with something about removeListener needing a function. Thanks to the call stack I now had in my error log, I quickly hunted that one down and fixed it – it was a very stupid mistake that thankfully rarely mattered, because the mails I usually deliver are small enough that socket draining usually isn’t required.

Onwards to the next issue filling the error log: «This deferred has already been resolved».

This comes from the Promise.js library if you emit*() multiple times on the same promise. This time, of course, the call stack was useless (… at <anonymous> – why, thank you), but I was very lucky again in that I tested from home, and my mail relay didn’t trust my home IP address and thus denied relaying with a 500, which immediately led to the exception.

Now, this one is crazy: when you call .addErrback() on a Promise before calling addCallback(), your callback will be executed even if the errback was executed first.

Promise.js does some really interesting things to simulate polymorphism in JavaScript and I really didn’t want to fix up that library, as node.js itself has lately been moving to a simpler continuation style using a callback parameter; so sooner or later I’ll have to patch up the SMTP server library anyway to remove Promise.js if I want to adhere to current node style.

So I took the workaround route of just calling addCallback() before addErrback(), even though the other order feels more natural to me. In addition, I reported the issue to the author, as this is clearly unexpected behavior.
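
In code, the workaround is purely a matter of ordering. A sketch of the shape of the calls (the send() method and its arguments are hypothetical; only addCallback()/addErrback() are the Promise.js API discussed above):

// Workaround: attach the callback before the errback; with this Promise.js
// behavior, the reverse order would run the callback even after a rejection.
var promise = client.send(from, recipients, message);   // hypothetical SMTP call

promise.addCallback(function () {
  // delivery succeeded
});

promise.addErrback(function (err) {
  // delivery failed, e.g. the relay answered with a 5xx
});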

Now the error log is pretty much silent (minus some ECONNRESET exceptions due to clients sending RST packets mid-transfer, but I don’t think those affect resource consumption), so I hope the overall stability of the site has improved a bunch – I’d love to go more than a day without having to restart the daemon :-)

Do spammers find pleasure in destroying fun stuff?

Recently, while reading through the log file of the mail relay used by tempalias, I noticed a disturbing trend: Apparently, SPAM was being sent through tempalias.

I’ve seen various behaviours. One was, strangely, to create one alias per second pointing to the same target and then deliver email there.

While I completely fail to understand this scheme, the other one was even more disturbing: Bots were registering {max-usage: 1, days: null} aliases and then sending one mail to them – probably to get around RBL checks they’d hit when sending SPAM directly.

Aside from the fact that I do not want to be helping spammers, this also posed a technical issue: the node.js HEAD I was running back when I developed the service tended to leak memory at times, forcing me to restart the service now and then.

Now the additional huge load created by the bots forced me to do that way more often than I wanted to. Of course, the old code didn’t run on current node any more.

Hence I had to take tempalias down for maintenance.

A quick look at my commits on GitHub will show you what I have done:

  • the tempalias SMTP daemon now does RBL checks and immediately disconnects if the connecting host is listed (a sketch of such a check follows after this list).
  • the tempalias HTTP daemon also does RBL checks on alias creation, but it doesn’t check the various DUL lists as the most likely alias creators are most certainly listed in a DUL
  • Per IP, aliases can only be generated every 30 seconds.
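
For illustration, here is a hedged sketch of what such an RBL lookup looks like in node; the list name and the calling code are just examples, not necessarily what tempalias actually does:

// Sketch: DNSBL check – reverse the IP's octets and query them under the list's zone.
var dns = require('dns');

function checkRBL(ip, zone, callback) {
  var reversed = ip.split('.').reverse().join('.');
  dns.resolve4(reversed + '.' + zone, function (err) {
    callback(!err);           // any A record means "listed"; NXDOMAIN means "clean"
  });
}

// example usage: drop the SMTP connection if the peer is listed
checkRBL('203.0.113.7', 'zen.spamhaus.org', function (listed) {
  if (listed) {
    // disconnect immediately, as described above
  }
});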

This should help somewhat. In addition, right now the mail relay is configured to skip sender checks and sa-exim scans (SpamAssassin at SMTP time, so as to reject spam before even accepting it into the system) for hosts that are allowed to relay. I intend to change that so that sa-exim and sender verification are done even if the connecting host is the tempalias proxy.

Looking at the mail log, I’ve seen the spam count drop to near-zero, so I’m happy, but I know that this is just a temporary victory. Spammers will find ways around the current protection and I’ll have to think of something else (I do have some options, but I don’t want to pre-announce them here for obvious reasons).

On a happier note: during maintenance I also fixed a few issues with the bookmarklet, which should now do a better job at not eventually coloring all text fields green and at using the target site’s jQuery if available.

Sprite avatars in Gravatar

After the release of Google Buzz, my Google profile, which I had had for years, finally became somewhat useful. Seeing that I really liked the avatar I had added to that profile, I decided that Frog should henceforth be my official avatar.

This also meant that I wanted to add Frog to my Gravatar profile, which unfortunately proved to be… let’s say interesting.

The image resizer Gravatar provides on their site to fit the uploaded image to the site’s needs apparently was not designed for sprites, as it tries to blow sprites up way out of proportion only to resize them back down. At first I thought I could get away with cheating by uploading the above picture with a huge margin added to it, but that only led to a JavaScript error in their uploader.

In the end, this is what I have done:

  1. Convert the picture into the TGA format
  2. Scale it using hq3x (explanation of hq3x)
  3. Convert it back to png and re-add transparency (hq3x had trouble with transparency in the TGA file)
  4. Scale it to 128 pixels in height
  5. Paste it into a pre-prepared 128×128 canvas
  6. Upload that.

This is how my gravatar looks now, which feels quite acceptable to me:

My Gravatar

The one in my Google profile was way easier to create: paste the original image into a 64 by 64 canvas and let Google do the resizing. It’s not as perfect as the hq3x algorithm, but that one suffers from the downsizing needed to make Frog fit 128 pixels in height anyway.

The other option would be to scale using hq2x and then paste that into a 128 by 128 canvas, yielding this sharper, but smaller image:

But whatever I do, Frog will still be resized by Gravatar (and thus destroyed), so I went with the image that contains more colored pixels at the expense of a bit of sharpness.

linktrail – a failed startup – introduction

I guess it’s inevitable. Good ideas may fail. And good ideas may be years ahead of their time. And of course, sometimes, people just don’t listen.

But one never stops learning.

In the year 2000, I took part in a plan of a couple of guys to become the next Yahoo (Google wasn’t quite there yet back then), or, to use the words we used on the site,

For these reasons, we have designed an online environment that offers a truly new way for people to store, manage and share their favourite online resources and enables them to engage in long-lasting relationships of collaboration and trust with other users.

The idea behind the project, called linktrail, was basically what would much later on be picked up by the likes of twitter, facebook (to some extent) and the various community based news sites.

The whole thing went down the drain, but the good thing is that I was able to legally salvage the source code, install it on a personal server of mine and publish it. And now that so many years have passed, it’s probably time to tell the world about this, which is why I have decided to start this little series about the project. What is it? How was it made? And most importantly: why did it fail? And consequently: what could we have done better?

But let’s first start with the basics.

As I said, I was able to legally acquire the database and code (which was mostly written by me anyway) and to install the site on a server of mine, so let’s get that out of the way to start with. The site is available at linktrail.pilif.ch. What you see running there is the result of six months of programming by myself, following a concept done by the guys I worked with to create this.

What is linktrail?

If the tour we made back then is any good, then just taking it would probably be enough, but let me phrase it in my own words: the site is a collection of so-called trails, which in turn are small units, comparable to blogs, consisting of links, titles and descriptions. These micro-blogs are shown in a popup window (that’s what we had back then) beside the browser window to allow quick navigation between the different links in the trail.

Trails are made by users, either by each user on their own or as a collaborative work between multiple users. The owner of a trail can hand out permissions to everybody or to their friends (using a system quite similar to what we currently see on Facebook, for example).

A trail is placed in a directory of trails, which was built around the directory structures we used back then, though by now we would probably do this quite differently. Users can subscribe to trails they are interested in. In that case, they will be notified whenever a trail they are subscribed to is updated, either by the owner or by anybody else with the rights to update the trail.

Every user (called expert in the site’s terms) has their profile page (here’s mine) that lists the trails they created and the ones they are subscribed to.

The idea was for you as a user to find others with similar interests and form a community around those interests to collaborate on trails. An in-site messaging system helped users communicate with each other: aside from just sending plain text messages, it’s possible to recommend trails (for easy one-click subscription).

linktrail was my first real programming project, started basically six months after I graduated from what the US would call high school. Combine that with the fact that it was created during the height of the browser wars (the year 2000, remember), with web standards basically non-existent, and you can imagine what a mess is running behind the scenes.

Still, the site works fine within those constraints.

In future posts, I will talk about the history of the project, about the technology behind the site, about special features and, of course, about why this all failed and what I would do differently – both in matters of code and organization.

If I’ve piqued your interest, feel free to have a look at the code of the site, which I just now converted from CVS (I started using CVS about 4 months into development, so the first commit is HUGE) to SVN to git and put up on GitHub for public consumption. It’s licensed under a BSD license, but I doubt that you’d find anything of use in this mess of PHP3(!) code (though it runs unchanged(!) on PHP5 – topic of another post, I guess), HTML 3.2(!) tag soup and JavaScript hacks.

Oh, and if you can read German, I have also converted the CVS repository that contains the concept papers written over time.

In preparation of this series of blog-posts, I have already made some changes to the code base (available at github):

  • login after register now works
  • warning about unencrypted(!) passwords in the registration form
  • registering requires you to solve a reCAPTCHA.

Sense of direction vs. field of view

Last Saturday, I bought the Metroid Prime Trilogy for the Wii. I didn’t yet have the Wii Metroid, and it’s impossible for me to use the GameCube to play the old games, as the distance between my couch and the receiver is too large for the GameCube’s wired joypads. It has been a long while since I last played any of the 3D Metroids, and seeing the box in a store made me want to play them again.

So all in all, this felt like a good deal to me: Getting the third Prime plus the possibility to easily play the older two for the same price that they once asked for the third one alone.

Now I’m in the middle of the first game and I’ve made a really interesting observation: my usually very good sense of direction seems to require a minimum-sized field of view to get going. While playing on the GameCube, I was constantly busy looking at the map and felt unable to recognize even the simplest landmarks.

I spent the game in a constant state of feeling lost, not knowing where to go and forgetting how to get back to places where I had seen then-unreachable powerups.

Now it might just be that I remember the world from my first playthrough, but this time playing feels completely different to me: I constantly know where to go and where I am. Even with rooms that are very similar to each other, I know where I am and how to get from point A to point B.

When I want to re-visit a place, I just go there. No looking at the map. No backtracking.

This is how I usually navigate the real world, so after so many years of feeling lost in 3D games, I’m finally able to find my way in them as well.

Of course I’m asking myself what has changed, and in the end it’s either the generally larger screen size, the wide-screen format of the Wii port, or maybe the controls via the Wiimote, which feel much more natural. The next step for me will be to try and find out which it is by connecting the Wii to a smaller (but still wide) screen.

But aside from all that, Metroid just got even better – not that I believed that to be possible.

Alt-Space

Today, I was looking into the new jnlp_href way of launching a Java Applet. Just like applet-launcher, this allows one to create applets that depend on native libraries without the usual hassle of manually downloading the files and installing them.

Unlike applet-launcher, it’s built into later versions of Java 1.6 and it’s officially supported, so I have higher hopes concerning its robustness.

It’s even possible to keep the applet-launcher calls in there if the user has an older Java Plugin that doesn’t support jnlp_href yet.

So in the end, you just write a .jnlp file describing your applet and add

<param name="jnlp_href" value="http://www.example.com/path/to/your/file.jnlp">

and be done with it.

Unless, of course, your JNLP file has a syntax error. Then you’ll get this in your error console (at least in the case of this specific syntax error):

java.lang.NullPointerException
    at sun.plugin2.applet.Plugin2Manager.findAppletJDKLevel(Unknown Source)
    at sun.plugin2.applet.Plugin2Manager.createApplet(Unknown Source)
    at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Unknown Source)
    at java.lang.Thread.run(Unknown Source)
Ausnahme: java.lang.NullPointerException

How helpful is that?

Thanks, by the way, for insisting on displaying a half-assed German translation on my otherwise English OS: never use locale info to determine the UI language, please.

Of course, this error does not give any indication of what the problem could be.

And even worse: The error in question is the topic of this blog post: It’s the dreaded Alt-Space character, 0xa0, or NBSP in ISO 8859-1.

0xa0 looks like a space, feels like a space, is incredibly easy to type instead of a space, but it’s not a space – not in the least. Depending on your compiler/parser, this will blow up in various ways:

pilif@celes ~ % ls | grep gnegg
zsh: command not found:  grep
pilif@celes ~ %
pilif@celes ~ % cat test.php
<?
echo "gnegg";
?>
pilif@celes ~ % php test.php
PHP Parse error:  syntax error, unexpected T_CONSTANT_ENCAPSED_STRING in /Users/pilif/test.php on line 2

Parse error: syntax error, unexpected T_CONSTANT_ENCAPSED_STRING in /Users/pilif/test.php on line 2
pilif@celes ~ %

and so on.

Now you people in the US with US keyboard layouts might think that I’m just one of those whiners – after all, how stupid must one be to press Alt-Space all the time? Probably stupid enough to deserve stuff like this.

Before you think these nasty thoughts, I ask you to consider the Swiss German keyboard layout: nearly all the characters programmers use are accessed by pressing Alt-[some letter]. At least on the Mac. Windows uses AltGr, i.e. the right Alt key, but on the Mac, either Alt will do.

So when you look at the shell line above:

ls | grep gnegg

you’ll see how easy it is to hit alt-space: First I type ls, then space. Then I press and hold alt-7 for the pipe and then, I am supposed to let go of alt and hit space. But because my left hand is on alt and the right one is pressing space, it’s very easy to hit space before letting go of alt.

Now, instead of getting immediate feedback, nothing happens. It looks as if the space had been added, when in fact something else has been added, and that something is not recognized as a whitespace character and is thus something completely different from a space – despite looking exactly the same.

As much fun as reading hexdump -C output is – I need this to stop.
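
Until someone tells me how to stop typing it, a small helper at least makes finding the character quick. This is just a quick sketch in plain node (nothing project-specific), reporting every 0xa0 byte with its line and column:

// Sketch: find non-breaking spaces (0xa0 / U+00A0) in a file passed as argument.
var fs = require('fs');

var file = process.argv[2];
var lines = fs.readFileSync(file, 'latin1').split('\n');

lines.forEach(function (line, i) {
  var col = line.indexOf('\u00a0');
  while (col !== -1) {
    console.log(file + ':' + (i + 1) + ':' + (col + 1) + ': non-breaking space');
    col = line.indexOf('\u00a0', col + 1);
  }
});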

Dear internet! How can I make my Mac (or Linux when using the Mac keyboard layout) stop recognizing Alt-Space?

To take the wind out of the sails of the inevitably arriving trolls:

  • I won’t use Windows again. Thank you. Neither do I want to use Linux on my desktop.
  • I cannot use the US keyboard layout because my brain just can’t handle the layout changing all the time, and as I’m a native German speaker, I do have to type umlauts now and then – actually often enough that the ¨+vowel combo isn’t acceptable.
  • While running Mac OS X, I’m stuck with the mac keyboard layout – I can’t use the Windows one.

The above JNLP error (printed here just in case somebody else has the same issue) caused me to lose nearly 5 hours of my life and will force me to work this weekend – who’d expect an XML parser error due to a space that isn’t one when seeing the above call stack?

Update: a commenter on reddit.com recommended using Ukelele, which I did; it helped me create a custom keyboard layout that makes Alt-Space behave just like a plain space. That’s the best solution for my specific taste, so thanks a lot!