Dependent on working infrastructure

If you create and later deploy and run a web application, you depend on working infrastructure: you need a working web server, a working application server and, in most cases, a working database server.

Also, you’d want a solution that always and consistently works.

We’ve been using lighttpd/FastCGI/PHP for our deployment needs lately. I’ve preferred this to Apache because of lighty’s easier configuration (automated virtual hosting out of the box, for example), the potentially higher performance (thanks to long-running FastCGI processes) and lighttpd’s smaller memory footprint.
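To give an idea of what I mean by easier configuration: lighty’s automated virtual hosting boils down to a few lines like these (a sketch from memory, with made-up paths – every directory below the server-root automatically becomes a virtual host, no per-host stanza required):

server.modules += ("mod_simple_vhost")
simple-vhost.server-root   = "/var/www/vhosts/"
simple-vhost.document-root = "htdocs"
simple-vhost.default-host  = "example.com"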

But last week, I had to learn the price of walking off the beaten path (Apache, mod_php).

In one particular constellation – the lighty, FastCGI, PHP combination running on a Gentoo box – a certain script sometimes (read: 50% of the time) didn’t output all the data it should have. Instead, lighty randomly sent out RST packets, without any indication of what could be wrong in any of the involved log files.

Naturally, I looked everywhere.

I read the source code of PHP. I created reduced test cases. I tried workarounds.
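If you ever have to chase similar symptoms: a packet capture is the quickest way to verify that the resets really originate on the server and not somewhere along the path. Something along these lines (the interface name is an assumption) will show them:

% tcpdump -i eth0 -n 'tcp port 80 and tcp[tcpflags] & tcp-rst != 0'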

The problem didn’t go away until I tested the same script with Apache.

This is where I’m getting pragmatic: I depend on working infrastructure. I need it to work. Our customers need it to work. I don’t care who is to blame. Is it PHP? Is it lighty? Is it Gentoo? Is it the ISP (though it would have to be on the sender’s end, as I’ve seen the described failure with different ISPs)?

I don’t care.

My interest is in developing a web application. Not in deploying one. Not really, anyways.

I’m willing (and able) to fix bugs in my development environment. I may even be able to fix bugs in my deployment platform. But I’m certainly not willing to. Not if there is a competing platform that works.

So after quite some time with lighty and FastCGI, it’s back to Apache. The prospect of a consistently working backend largely outweighs the theoretical benefits of memory savings, I’m afraid.

Ubuntu 8.04

I’m sure that you have heard the news: Ubuntu 8.04 is out.

Congratulations to Canonical and their community for another fine release of a really nice Linux distribution.

What prompted me to write this entry though is the fact that I updated shion from 7.10 to 8.04 this afternoon – over an SSH connection.

The whole process took about 10 minutes (including the download time) and was completely flawless. Everything kept working as it was before. After the reboot (which also went flawlessly), even OpenVPN came back up and connected to the office so I could have a look at how the update went.
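For reference, the server-side upgrade path over SSH is essentially two commands, as documented by Ubuntu (assuming the update-manager-core package is available):

% sudo apt-get install update-manager-core
% sudo do-release-upgrade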

This is very, very impressive. Updates are tricky. Especially considering that it’s not one application that’s updated, not even one OS. It’s a seemingly random collection of various applications with their interdependencies, making it virtually impossible to test each and every configuration.

This shows that with a good foundation, everything is possible – even when you don’t have the opportunity to test for each and every case.

Congratulations again, Ubuntu team!

Web service authentication

When reading an article about how to make Google Reader work with authenticated feeds, one big flaw behind all those web 2.0 services sprang to mind: authentication.

I know that there are efforts underway to standardise on a common method of service authentication, but we are nowhere near there yet.

Take Facebook: they offer to send an invitation to all your friends if you enter your email account data into a form. Or take the article I was referring to: they want your account data for an authenticated feed so they can make it available in Google Reader.

But think of what you are giving away…

For your service provider to be able to interact with that other service, they need to store your password, be it short term (Facebook, hopefully) or long term (any online feed reader with authentication support). They can (and do) assure you that they will store the data in encrypted form, but to access the service in the end, they need the unencrypted password – which requires them not only to use reversible encryption, but also to keep the encryption key around.

Do you want a company in a country whose laws you are not familiar with to have access to all your account data? Do you want to give them the password to your personal email account? Or to everything else in case you share passwords?

People don’t seem to get this problem as account data is freely given all over the place.

Efforts like OAuth are clearly needed, but as web-based technology they can’t solve all the problems (what about email accounts, for example?).

But is this the right way? We can’t even trust desktop applications. Personally, I think the good old username/password combination is at the end of its usefulness (was it ever really useful?). We need new, better ways of proving our identity – something that is easily passed around and yet cannot be copied.

SSL client certificates feel like an underused but very interesting option. Let’s look at two examples: the first is your authenticated feed; the second is your SSL-enabled email server. Say you want to give a web service revocable access to both services without ever giving away personal information.

For the authenticated feed, the external service presents the feed server with its client-side certificate, which you have signed. By checking your signature, the feed server knows your identity, and by checking your CRL it knows whether you have authorized the access. The service doesn’t know your password and can’t use your signature for anything but accessing that feed.

The same goes for the email server: the third-party service logs in with your username and the client certificate you signed, but without a password. The service doesn’t need to know your password, and if they misbehave, you revoke the certificate and are done with it (I’m not sure whether mail servers support client certificates, but I gather they do, as it’s part of the SSL spec).
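To sketch the signing part with the stock openssl command line (all file names are made up, and a real setup would additionally maintain a CRL via an openssl CA configuration): you act as your own CA, the service generates its own key and a certificate signing request, and you sign that request. The service never holds anything but its own key and your signature:

% openssl req -new -x509 -days 3650 -keyout ca.key -out ca.crt
% openssl req -new -newkey rsa:2048 -keyout service.key -out service.csr
% openssl x509 -req -days 365 -in service.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out service.crt

Revoking the service’s access then means putting service.crt on your CRL instead of changing any passwords.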

Client-side certificates already provide a standard means of secure authentication without ever passing a known secret around. Why aren’t they used far more often these days?

VMware shared folders and Visual Studio

Ever since I saw the light, I’ve been using git for every possible situation. Subversion is OK, but git is fun. It changed the way I do development. It allowed me to create ever so many fun features for our product – even in spare time, without the fear of never completing them and thus wasting the work.

I have so many branches of all our projects – every one of them containing a useful, but just not ready for prime time, feature. But when the time is right, I will be able to use that work. No more wasting it because a bugfix touches the same file.

The day I dared to use git was the day that changed how I work.

Now, naturally, I wanted to use all that freedom for my Windows work as well, but as you know, git just isn’t quite there yet. In fact, I had an awful lot of trouble with it, mainly because its integrated SSH client doesn’t work with my PuTTY/pageant setup.

So I resorted to storing my Windows development stuff on my Mac file system and using VMware Fusion’s shared folders feature to access the source files.

Unfortunately, it didn’t work very well at first as this is what I got:

Error message saying that the 'Project location is not trusted'

I didn’t even try to find out what happens when I compile and run the project from there. Instead, I pressed F1 and followed the instructions given there to get rid of the message that the “Project location is not trusted”.

They didn’t help.

I tried adding various UNC paths to the intranet zone, but none of them worked.

Then I tried sharing the folder via Mac OS X’s built-in SMB server. This time, the path I had set up using mscorcfg.msc actually seemed to do something: Visual Studio stopped complaining. And then I found out why:

Windows treats host names containing a dot (.) as internet resources. Host names without dots are considered intranet resources.

\\celes\windev worked in mscorcfg.msc because celes, not containing a dot, was counted as an intranet resource.

\\.host contains a dot and is thus counted as an internet resource.

This means that to make the .NET framework trust your VMware shared folder, you have to add the path to the “Internet_Zone” – not the “LocalIntranet_Zone”, because the framework loader doesn’t even look there.
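If you prefer the command line over mscorcfg.msc, caspol.exe should be able to do the same thing – assuming the default numbering of the machine-level code groups, where group 1.3 is the Internet_Zone:

caspol -machine -addgroup 1.3 -url "file://\\.host/Shared Folders/*" FullTrust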

Once I’d changed that configuration, Visual Studio complained that it was unable to parse the host name – it seems to assume host names never start with a dot.

This was fixed by mapping the path to a drive letter, like we did centuries ago.
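The mapping itself is a one-liner in cmd.exe; the drive letter and the share name behind \\.host are placeholders for whatever you configured in Fusion:

net use W: "\\.host\Shared Folders\windev" /persistent:yes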

Now VS is happy and I can have the best of all worlds:

  • I can keep my windows development work in a git repository
  • I have a useful (and working) shell and ssh-agent to actually “git svn dcommit” my work
  • I don’t have to export any folders of my mac via SMB
  • Time Machine now also backs up my Windows work, which I had to do manually until now.

Very nice indeed, but now back to work (with git :-) ).

git branch in ZSH prompt

Screenshot of the terminal showing the current git branch

Today, I came across a little trick on how to output the current git branch in your bash prompt. This is very useful – though not as much for me, as I’m using ZSH. Of course, I wanted to adapt the method (and use fewer backslashes :-) ).

Also, my setup makes use of ZSH’s prompt themes feature, from which I’ve chosen the theme “adam1”. So let’s use that as a starting point.

  1. First, create a copy of the prompt theme in a directory under your control where you intend to store private ZSH functions (~/zshfuncs in my case).
    cp /usr/share/zsh/4.3.4/functions/prompt_adam1_setup ~/zshfuncs/prompt_pilif_setup
  2. Tweak the file. I’ve adapted the prompt from the original article, but I’ve managed to get rid of all the backslashes (to actually make the regex readable) and to place it nicely in the adam1 prompt framework.
  3. Advise ZSH about the new function directory (if you haven’t already done so).
    fpath=(~/zshfuncs $fpath)
  4. Load your new prompt theme.
    prompt pilif

And here’s the adapted adam1 prompt theme:

# pilif prompt theme

prompt_pilif_help () {
  cat <<'EOF'
This prompt is color-scheme-able.  You can invoke it thus:

  prompt pilif [<color1> [<color2> [<color3>]]]

This is heavily based on adam1 which is distributed with ZSH. In fact,
the only change from adam1 is support for displaying the current branch
of your git repository (if you are in one)
EOF
}

prompt_pilif_setup () {
  prompt_adam1_color1=${1:-'blue'}
  prompt_adam1_color2=${2:-'cyan'}
  prompt_adam1_color3=${3:-'green'}

  base_prompt="%{$bg_no_bold[$prompt_adam1_color1]%}%n@%m%{$reset_color%} "
  post_prompt="%{$reset_color%}"

  base_prompt_no_color=$(echo "$base_prompt" | perl -pe "s/%{.*?%}//g")
  post_prompt_no_color=$(echo "$post_prompt" | perl -pe "s/%{.*?%}//g")

  precmd  () { prompt_pilif_precmd }
  preexec () { }
}

prompt_pilif_precmd () {
  setopt noxtrace localoptions
  local base_prompt_expanded_no_color base_prompt_etc
  local prompt_length space_left
  local git_branch

  git_branch=`git branch 2>/dev/null | grep -e '^\*' | sed -E 's/^\* (.+)$/(\1) /'`
  base_prompt_expanded_no_color=$(print -P "$base_prompt_no_color")
  base_prompt_etc=$(print -P "$base_prompt%(4~|...|)%3~")
  prompt_length=${#base_prompt_etc}
  if [[ $prompt_length -lt 40 ]]; then
    path_prompt="%{$fg_bold[$prompt_adam1_color2]%}%(4~|...|)%3~%{$fg_bold[white]%}$git_branch"
  else
    space_left=$(( $COLUMNS - $#base_prompt_expanded_no_color - 2 ))
    path_prompt="%{$fg_bold[$prompt_adam1_color3]%}%${space_left}<...<%~ %{$reset_color%}$git_branch%{$fg_bold[$prompt_adam1_color3]%} $prompt_newline%{$fg_bold_white%}"
  fi

  PS1="$base_prompt$path_prompt %# $post_prompt"
  PS2="$base_prompt$path_prompt %_> $post_prompt"
  PS3="$base_prompt$path_prompt ?# $post_prompt"
}

prompt_pilif_setup "$@"

The theme file can be downloaded here.

This tram runs Microsoft® Windows® XP™

Image of the new station information system in some of Zurich's tramways with a Windows GPF on top of the display

The trams here in Zürich were recently upgraded with a really useful system providing an overview of the next couple of stations and the times when they will be reached.

Today, I managed to grab this picture, which once again clearly shows why Windows maybe isn’t the right platform for something like this. Also, have a look at the number of applications in the taskbar (I know, the picture is bad, but that’s all I could get out of my mobile phone)…

If I were tasked with implementing something like this, I’d probably use Linux in the backend and a web browser as the frontend. That way, it’s easier to debug, more robust and less embarrassing if it blows up.

Shell history stats

It seems to be cool nowadays to post the output of a certain Unix command to one’s blog. So here I come:

pilif@celes ~
 % fc -l 0 -1 |awk '{a[$2]++ } END{for(i in a){print a[i] " " i}}'|sort -rn|head
467 svn
369 cd
271 mate
243 git
209 ssh
199 sudo
184 grep
158 scp
124 rm
115 ./clitest.sh

clitest.sh is a little wrapper around wget which I use to do protocol-level debugging of the PopScan Server.
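In case you want to adapt the one-liner: here it is spread out with comments. In zsh, fc -l 0 -1 lists the entire history; the event number ends up in awk’s $1 and the command name in $2:

fc -l 0 -1 |                                  # the whole shell history
  awk '{ a[$2]++ }                            # count the command names
       END { for (i in a) print a[i], i }' |
  sort -rn |                                  # most used first
  head                                        # top ten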

Converting Java keytool-certificates

To be able to read barcodes from connected barcode scanners into the web-based version of PopScan, we have to use a signed applet – there is no other way to get the needed level of hardware access.

The signature, by the way, doesn’t at all prevent a developer from doing bad stuff – it just puts their signature below it (literally). It raises the bar for distributing malware that way: the checks when applying for a certificate usually are very rigid, so nobody can easily pose as someone else and the origin of any piece of code remains traceable.

But there is no validation of the actual code being signed, and I doubt that the certificate authorities out there actually revoke certificates used to sign malware – though that remains to be seen.

Anyways. Back to the topic.

In addition to the Java applet, we also develop the Windows client frontend to the PopScan Server, and we have a small frontend that runs on Windows CE (or Windows Mobile) based barcode-capable devices. Traditionally, neither of these was signed.

But lately, with Vista and Windows Mobile 6, signing has become more and more important: both systems complain with varying loudness about unsigned code, so I naturally prefer the code to be signed – we DO have a code signing certificate for our applet, after all.

Now, the thing is that keytool, Java’s way of handling code signing keys, doesn’t allow a private key to be exported. This meant there was no obvious way for me to ever use the certificate we got for our applet to sign Windows EXEs.

Going back to the CA and asking them to send over an additional certificate was no option for me: aside from the fact that it would certainly have cost another two years’ fee, it would have meant proving our identity all over again – one year too early, as our current certificate is valid until 2009.

But then I found a solution. Here’s how you convert a Java keystore certificate into something you can use with Microsoft’s Authenticode:

  1. Start KeyTool GUI
  2. In the Treeview, click “Export”, “Private Key”
  3. Select your Java keystore file
  4. Enter two target file names for your key and the certificate chain (and select PEM format)
  5. Click OK

Now you will have two more files. One is your private key (I’ve named it key.pem), the other is the certificate chain (cert.pem in my case). Now use OpenSSL to convert this into something Microsoft likes to see:

% openssl pkcs12 -inkey key.pem -in cert.pem -out keypair.pfx -export

openssl will ask for a password to encrypt the pfx file with, and you’ll be done. Now you can use the pfx file like any other pfx file you received from your certificate authority (double-click it to install it, or use it with signcode.exe to directly sign your code).

Remember to delete key.pem as it’s the unencrypted private key!
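As a side note: from Java 6 on, keytool should be able to do the JKS-to-PKCS12 conversion directly via -importkeystore, skipping the GUI/OpenSSL detour entirely (keystore name and alias are placeholders):

% keytool -importkeystore -srckeystore keystore.jks -srcalias codesign -destkeystore keypair.pfx -deststoretype PKCS12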

Old URLs fixed

I have just added two rewrite rules to automatically translate most of the old s9y URLs into something WordPress understands.

The first one was easy and could be done in WP’s .htaccess file:

RewriteRule ^archives/([0-9]+)/([0-9]+)\.html$ /$1/$2 [R=permanent,L]

This handles the s9y-style URLs for monthly archives (/archives/2007/10.html becomes /2007/10, for example) – something that apparently still got quite a few hits; at least it’s one of the 404 errors I encountered the most in my log files.

The second one is the direct link to old posts. While this could have been done with a PHP/.htaccess-only solution, I took the opportunity to learn how to do custom URL maps for mod_rewrite. These, of course, only work in httpd.conf, so this probably isn’t something everyone can do on their hosting plan:

RewriteEngine On
RewriteMap s9yconv prg:/home/pilif/url-s9y2wp.php

After defining this, I could use the map in WP’s .htaccess:

RewriteRule ^archives/([0-9]+)-(.*)\.html$ /${s9yconv:$2} [R=permanent,L]

The script is very simple as you can see here:

#!/usr/bin/php
<?php
include('wp/wp-includes/formatting.php');
while (($line = fgets(STDIN)) !== false){
    $line = preg_replace('#\.html$#', '', $line);
    $line = sanitize_title_with_dashes(preg_replace('#^[0-9]+-#', '', $line));
    echo "$line\n";
}
?>
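Since prg: rewrite maps simply read one key per line from stdin and answer with one line on stdout, the script is easy to test by hand before wiring it into Apache (made-up input; the exact slug depends on WordPress’ sanitizer):

% echo "123-Some_old_Title.html" | /home/pilif/url-s9y2wp.php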

While WP is configured to create permalinks containing the date, you can usually just feed it the URL-ized title and it will find the correct entry on its own. This has the advantage that the script – which is long-running, per the specification of prg rewrite maps – is kept as simple as possible. That matters because PHP doesn’t always free all allocated memory, something you don’t want in a long-running process like this one. It’s also why I redirect to something WP still has to do some work on: it spares me all the database handling.

If I had to do this without the ability to change httpd.conf, I would use a rule like this:

RewriteRule ^archives/([0-9]+)-(.*)\.html$ /s9y-convert.php/$2 [L]

and then apply the above logic in that script.

Both approaches work the same, but I wanted to try out how to do a dynamic rewrite map.

Thanks, Ebi

Yesterday, Ebi invited me and my girlfriend over for dinner and a round of Trivial Pursuit.

I fail to find words to describe how awesomely good the meal was. I would have loved a fourth serving, but I just couldn’t stuff in even a microgram more.

And the Trivial Pursuit was fun as ever – that game just shines if you don’t take it seriously.

Thanks Ebi. I had a blast!