Dependent on working infrastructure

If you create and later deploy and run a web application, you depend on working infrastructure: you need a working web server, a working application server and, in most cases, a working database server.

Also, you’d want a solution that works consistently, every single time.

We’ve been using lighttpd/FastCGI/PHP for our deployment needs lately. I’ve preferred this to Apache because of lighty’s easier configuration (automated virtual hosting out of the box, for example), the potentially higher performance (thanks to long-running FastCGI processes) and lighttpd’s smaller memory footprint.
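To give an idea of what that easier configuration looks like, here is a minimal sketch of lighty’s automated virtual hosting via mod_simple_vhost; the paths and host name are assumptions for illustration, not our actual setup:

# lighttpd.conf: mod_simple_vhost maps the Host: header to a directory,
# so adding a vhost means creating a directory - no config change needed
server.modules += ( "mod_simple_vhost" )
simple-vhost.server-root   = "/var/www/vhosts/"  # one subdirectory per host name
simple-vhost.default-host  = "example.com"       # fallback when no directory matches
simple-vhost.document-root = "htdocs/"           # docroot inside each vhost directory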

But last week, I had to learn the price of walking off the beaten path (Apache, mod_php).

In one particular constellation (lighty, FastCGI and PHP running on a Gentoo box), a certain script sometimes (read: 50% of the time) didn’t output all the data it should have. Instead, lighty randomly sent out TCP RST packets, and none of the involved log files gave any indication of what could be wrong.

Naturally, I looked everywhere.

I read the source code of PHP. I created reduced test cases. I tried workarounds.

The problem didn’t go away until I tested the same script with Apache.

This is where I’m getting pragmatic: I depend on a working infrastructure. I need it to work. Our customers need it to work. I don’t care who is to blame. Is it PHP? Is it lighty? Is it Gentoo? Is it the ISP (though it would have to be on the sender’s end, as I’ve seen the described failure with different ISPs)?

I don’t care.

My interest is in developing a web application. Not in deploying one. Not really, anyway.

I’m willing (and able) to fix bugs in my development environment. I may even be able to fix bugs in my deployment platform. But I’m certainly not willing to. Not if there is a competing platform that works.

So after quite some time with lighty and FastCGI, it’s back to Apache. The prospect of a consistently working backend largely outweighs the theoretical benefits of the memory savings, I’m afraid.

Ubuntu 8.04

I’m sure that you have heard the news: Ubuntu 8.04 is out.

Congratulations to Canonical and their community for another fine release of a really nice Linux distribution.

What prompted me to write this entry, though, is the fact that I updated shion from 7.10 to 8.04 this afternoon. Over an SSH connection.

The whole process took about 10 minutes (including the download time) and was completely flawless. Everything kept working as it was before. After the reboot (which also went flawlessly), even OpenVPN came back up and connected to the office so I could have a look at how the update went.
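For the record, a remote upgrade like this boils down to very little typing; this is a sketch of the usual procedure (assuming update-manager-core, which provides the upgrade tool, isn’t installed yet):

sudo apt-get install update-manager-core   # provides do-release-upgrade
sudo do-release-upgrade                    # interactive 7.10 -> 8.04 upgrade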

This is very, very impressive. Updates are tricky. Especially considering that it’s not one application that’s updated, not even one OS. It’s a seemingly random collection of various applications with their interdependencies, making it virtually impossible to test each and every configuration.

This shows that with a good foundation, everything is possible – even when you don’t have the opportunity to test for each and every case.

Congratulations again, Ubuntu team!

Web service authentication

When reading an article about how to make Google Reader work with authenticated feeds, one big flaw behind all those Web 2.0 services sprang to my mind: authentication.

I know that there are efforts underway to standardise on a common method of service authentication, but we are nowhere near there yet.

Take Facebook: they offer to send an invitation to all your friends if you enter your email account credentials into a form. Or take the article I was referring to: they want your account data for an authenticated feed to make it available in Google Reader.

But think of what you are giving away…

For your service provider to be able to interact with that other service, they need to store your password, be it short-term (Facebook, hopefully) or long-term (any online feed reader with authentication support). They can (and do) assure you that they store the data in encrypted form, but to access the service in the end, they need the unencrypted password. That requires them not only to use reversible encryption, but also to keep the encryption key around.

Do you want a company in a country whose laws you are not familiar with to have access to all your account data? Do you want to give them the password to your personal email account? Or to everything else in case you share passwords?

People don’t seem to get this problem, as account data is freely given away all over the place.

Efforts like OAuth are clearly needed, but as a web-based technology, it clearly can’t solve all the problems (what about email accounts, for example?).

But is this the right way? We can’t even trust desktop applications. Personally, I think the good old username/password combination is at the end of its usefulness (was it ever really useful?). We need new, better ways of proving our identity. Something that is easily passed around and yet cannot be copied.

SSL client certificates feel like an underused but very interesting option. Let’s look at two examples: the first is your authenticated feed; the second is your SSL-enabled email server. Let’s say you want to give a web service revocable access to both services without ever giving away personal information.

For the authenticated feed, the external service presents the feed server with a client-side certificate which you have signed. By checking your signature, the feed server knows your identity, and by checking your CRL it knows whether you have authorized the access. The service doesn’t know your password and can’t use the certificate for anything but accessing that feed.

The same goes for the email server: the third-party service logs in with your username and the client certificate signed by you, but without a password. The service doesn’t need to know your password, and in case they do something wrong, you revoke your signature and are done with it (I’m not sure whether mail servers support client certificates, but I gather they do, as it’s part of the SSL spec).
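As a minimal sketch of how this could look, using nothing but the openssl command line tool (this assumes you run a small private CA with ca.crt, ca.key and an openssl ca configuration in ca.cnf; all file names are made up for illustration):

# the service generates a key and a certificate signing request;
# the private key never leaves the service
openssl genrsa -out service.key 2048
openssl req -new -key service.key -out service.csr -subj "/CN=feedreader.example"

# you sign the request with your private CA, thereby granting access
openssl x509 -req -in service.csr -CA ca.crt -CAkey ca.key \
    -CAcreateserial -out service.crt -days 365

# to take access away again, put the certificate on your CRL,
# which the feed or mail server checks on every connection
openssl ca -config ca.cnf -revoke service.crt
openssl ca -config ca.cnf -gencrl -out ca.crl

The service only ever holds its own key and your signature, and revoking that signature is entirely in your control.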

Client-side certificates already provide a standard means of secure authentication without ever passing a known secret around. Why aren’t they used way more often these days?

VMware shared folders and Visual Studio

Ever since I saw the light, I’ve been using git in every possible situation. Subversion is OK, but git is fun. It changed the way I do development. It allowed me to create ever so many fun features for our product, even in spare time, without the fear of never completing them and thus wasting the work.

I have so many branches of all our projects, every one of them containing a useful, but just not ready-for-prime-time feature. When the time is right, I will be able to use that work. No more throwing it away because a bugfix touches the same file.

The day I dared to use git was the day that changed how I work.

Now naturally, I wanted to use all that freedom for my Windows work as well, but as you know, git just isn’t quite there yet on Windows. In fact, I had an awful lot of trouble with it, mainly because its integrated SSH client doesn’t work with my PuTTY/Pageant setup.

So I resorted to storing my Windows development stuff on my Mac file system and using VMware Fusion’s shared folders feature to access the source files.

Unfortunately, it didn’t work very well at first as this is what I got:

Error message saying that the 'Project location is not trusted'

I didn’t even try to find out what happens when I compile and run the project from there; instead, I pressed F1 and followed the instructions given there to get rid of the message that the “Project location is not trusted”.

I followed them, but it didn’t help.

I tried adding various UNC paths to the intranet zone, but none of them worked.

Then I tried sharing the folder via Mac OS X’s built-in SMB server. This time, the path I had set up using mscorcfg.msc actually seemed to do something: Visual Studio stopped complaining. And then I found out why:

Windows treats host names containing a dot (.) as internet resources. Host names without dots are considered to be intranet resources.

\\celes\windev worked in mscorcfg.msc because celes, not containing a dot, was counted as an intranet resource.

\\.host, however, contains a dot and is thus counted as an internet resource.

This means that to make the .NET framework trust your VMware shared folder, you have to add the path to the “Internet_Zone”, not the “LocalIntranet_Zone”, because the framework loader doesn’t even look there.

Once I had changed that configuration, Visual Studio complained that it was unable to parse the host name; it seems to assume that host names don’t start with a dot.

This was fixed by mapping the path to a drive letter, like we did centuries ago.
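For reference, the whole dance looks roughly like this from an administrative command prompt; the share name, drive letter and code group label here are assumptions, so list the code groups on your machine first to find the right label:

rem map the shared folder to a drive letter (share name is made up)
net use W: "\\.host\Shared Folders\windev"

rem find the label of Internet_Zone in the machine policy (1.3 here)
caspol -machine -listgroups

rem grant FullTrust to assemblies loaded from the mapped share
caspol -machine -addgroup 1.3 -url "file://W:/*" FullTrust -name VMwareShare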

Now VS is happy and I can have the best of all worlds:

  • I can keep my windows development work in a git repository
  • I have a useful (and working) shell and ssh-agent to actually “git svn dcommit” my work
  • I don’t have to export any folders of my mac via SMB
  • Time Machine now also backs up my Windows work, which I had to do manually until now.

Very nice indeed, but now back to work (with git :-) ).

git branch in ZSH prompt

Screenshot of the terminal showing the current git branch

Today, I came across a little trick for displaying the current git branch in your bash prompt. This is very useful, though not as much for me, as I’m using ZSH. Of course, I wanted to adapt the method (and to use fewer backslashes :-) ).

Also, in my setup, I’m making use of ZSH’s prompt themes feature, and I’ve chosen the theme “adam1”. So let’s use that as a starting point.

  1. First, create a copy of the prompt theme into a directory of your control where you intend to store private ZSH functions (~/zshfuncs in my case).
    cp /usr/share/zsh/4.3.4/functions/prompt_adam1_setup ~/zshfuncs/prompt_pilif_setup
  2. Tweak the file. I’ve adapted the prompt from the original article, but I’ve managed to get rid of all the backslashes (to actually make the regex readable) and to place it nicely in the adam1 prompt framework.
  3. Advise ZSH about the new ZSH function directory (if you haven’t already done so).
    fpath=(~/zshfuncs $fpath)
  4. Load your new prompt theme (if you don’t use ZSH’s prompt theme system yet, see the note after this list).
    prompt pilif
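Note that this assumes ZSH’s prompt theme system is already enabled. If it isn’t, a minimal ~/.zshrc along these lines takes care of everything at once:

fpath=(~/zshfuncs $fpath)  # make the private function directory known
autoload -U promptinit     # load the prompt theme system
promptinit
prompt pilif               # activate the adapted theme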

And here’s the adapted adam1 prompt theme:

# pilif prompt theme

prompt_pilif_help () {
  cat <<'EOF'
This prompt is color-scheme-able.  You can invoke it thus:

  prompt pilif [<color1> [<color2> [<color3>]]]

This is heavily based on adam1 which is distributed with ZSH. In fact,
the only change from adam1 is support for displaying the current branch
of your git repository (if you are in one)
EOF
}

prompt_pilif_setup () {
  prompt_adam1_color1=${1:-'blue'}
  prompt_adam1_color2=${2:-'cyan'}
  prompt_adam1_color3=${3:-'green'}

  base_prompt="%{$bg_no_bold[$prompt_adam1_color1]%}%n@%m%{$reset_color%} "
  post_prompt="%{$reset_color%}"

  base_prompt_no_color=$(echo "$base_prompt" | perl -pe "s/%{.*?%}//g")
  post_prompt_no_color=$(echo "$post_prompt" | perl -pe "s/%{.*?%}//g")

  precmd  () { prompt_pilif_precmd }
  preexec () { }
}

prompt_pilif_precmd () {
  setopt noxtrace localoptions
  local base_prompt_expanded_no_color base_prompt_etc
  local prompt_length space_left
  local git_branch

  # extract the current branch ("* master" becomes "(master) "); empty outside a repository
  git_branch=`git branch 2>/dev/null | grep -e '^*' | sed -E 's/^\* (.+)$/(\1) /'`
  base_prompt_expanded_no_color=$(print -P "$base_prompt_no_color")
  base_prompt_etc=$(print -P "$base_prompt%(4~|...|)%3~")
  prompt_length=${#base_prompt_etc}
  if [[ $prompt_length -lt 40 ]]; then
    path_prompt="%{$fg_bold[$prompt_adam1_color2]%}%(4~|...|)%3~%{$fg_bold[white]%}$git_branch"
  else
    space_left=$(( $COLUMNS - $#base_prompt_expanded_no_color - 2 ))
    path_prompt="%{$fg_bold[$prompt_adam1_color3]%}%${space_left}<...<%~ %{$reset_color%}$git_branch%{$fg_bold[$prompt_adam1_color3]%} $prompt_newline%{$fg_bold_white%}"
  fi

  PS1="$base_prompt$path_prompt %# $post_prompt"
  PS2="$base_prompt$path_prompt %_> $post_prompt"
  PS3="$base_prompt$path_prompt ?# $post_prompt"
}

prompt_pilif_setup "$@"

The theme file can be downloaded here.

This tram runs Microsoft® Windows® XP™

Image of the new station information system in some of Zurich's tramways with a Windows GPF on top of the display

The trams here in Zürich were recently upgraded with a really useful system providing an overview of the next couple of stations and the times when they will be reached.

Today, I managed to grab this picture, which once again clearly shows why Windows maybe isn’t the right platform for something like this. Also, have a look at the number of applications in the taskbar (I know, the picture is bad, but that’s all I can get out of my mobile phone)…

If I were tasked with implementing something like this, I’d probably use Linux in the backend and a web browser as a frontend. That way it’s easier to debug, more robust and less embarrassing if it blows up.

Shell history stats

It seems to be cool nowadays to post the output of a certain Unix command to one’s blog. So here I come:

pilif@celes ~
 % fc -l 0 -1 |awk '{a[$2]++ } END{for(i in a){print a[i] " " i}}'|sort -rn|head
467 svn
369 cd
271 mate
243 git
209 ssh
199 sudo
184 grep
158 scp
124 rm
115 ./clitest.sh

clitest.sh is a little wrapper around wget which I use to do protocol-level debugging of the PopScan Server.
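In case the one-liner looks opaque: fc -l 0 -1 lists the entire shell history, one entry per line, where $1 is the event number and $2 the command word. Spread out as a zsh script, the same pipeline reads:

fc -l 0 -1 |                                        # the whole history
  awk '{a[$2]++} END{for(i in a) print a[i], i}' |  # count each command word
  sort -rn |                                        # most frequent first
  head                                              # top ten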