Double-blind cola test

The final analysis

Two of my coworkers decided today after lunch that it was time to solve the age-old question: Is it actually possible to tell different kinds of cola apart just by tasting them?

In the spirit of true science (and a hefty dose of Mythbusters), we decided to do this the right way and set up a double-blind test. The idea is that not only must the tester not know which sample is which, but neither may the person administering the test, to make sure the tester isn't influenced in any way.

So here’s what we have done:

  1. We bought 5 different types of cola: A can of coke light, a can of standard coke, a PET bottle of standard coke, a can of coke zero and finally, a can of the new Red Bull cola (in danger of spoiling the outcome: eek).
  2. We marked five glasses with numbers from 1 to 5 at the bottom.
  3. We asked a coworker not taking part in the actual test to fill the glasses with the respective drink.
  4. We put the glasses on our table in random order and designated each glass's position with a letter from A to E.
  5. One after another, we drank the samples and noted which glass (A-E) we thought contained which drink (1-5). So as not to influence one another during the test, the kitchen area was off-limits for everyone but the current tester, and each person's results were to be kept secret until the end of the test.
  6. We compared notes.
  7. We checked the bottom of the glasses to see how we fared.

The results are interesting:

  • Of the four people taking part in the test, all but one person guessed all types correctly. The one person who failed wasn’t able to correctly distinguish between bottled and canned standard coke.
  • Everyone instantly recognized the Red Bull Cola (no wonder there: it's much brighter than the other contenders and it smells like cough medicine).
  • Everyone got the coke light and zero correctly.
  • Although the tester pool was way too small, it's interesting that 75% of the testers were able to tell the coke from the bottle apart from the coke from the can. I would not have guessed that – but then again, once everything else is identified, it's a 50/50 guess, so we may all just have been lucky. At least I was, to be honest.

Fun in the office doing pointless stuff after lunch, I guess.

New MacMini (early 09) and Linux

The new MacMinis that were announced this week come with a Firewire 800 port which was reason enough for me to update shion yet again (keeping the host name of course).

All my media she’s serving to my various systems is stored on a second generation Drobo which is currently connected via USB2, but has a lingering FW800 port.

Of course the upgrade to FW800 will not double the transfer rate to and from the drobo, but it should increase it significantly, so I went ahead and got one of the new Minis.

As usual, I inserted the Ubuntu (Intrepid) CD, held down C while turning the machine on and completed the installation.

This left the Mini in an unbootable state.

It seems that this newest generation of Mac hardware isn't capable of booting from an MBR-partitioned hard drive. Earlier Macs complained a bit if the drive wasn't partitioned the way they liked, but then went ahead and booted the other OS anyways.

Not so with the new boxes, it seems.

To finally achieve what I wanted, I had to go through the following complicated procedure:

  1. Install rEFIt (just download the package and install the .mpkg file)
  2. Use the Bootcamp assistant to repartition the drive.
  3. Reboot with the Ubuntu Desktop CD and run parted (a rough sketch of steps 4 to 6 follows after this list; the partitioning could probably also be accomplished with the console installer, but I didn't manage to do it correctly).
  4. Resize the FAT32-partition which was created by the Bootcamp partitioner to make room at the end for the swap partition.
  5. Create the swap partition.
  6. Format the FAT32 partition with something useful (ext3).
  7. Restart and enter the rEFIt partitioning tool (it's in the boot menu).
  8. Allow it to resync the MBR.
  9. Insert the Ubuntu Server CD and reboot holding the C key.
  10. Install Ubuntu normally, but don’t change the partition layout – just use the existing partitions.
  11. Reboot and repeat steps 7 and 8
  12. Start Linux.
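
For reference, here is roughly what steps 4 to 6 looked like from the Ubuntu Desktop CD session. This is only a sketch: the device and partition numbers are assumptions and depend entirely on how Boot Camp laid out your particular disk.

 % sudo parted /dev/sda print   # inspect the layout Boot Camp created
 % sudo parted /dev/sda         # interactively shrink the FAT32 partition and
                                # create the swap partition (steps 4 and 5)
 % sudo mkswap /dev/sda4        # initialise the new swap partition
 % sudo mkfs.ext3 /dev/sda3     # step 6: replace the FAT32 filesystem with ext3

The MBR/GPT resync itself (steps 7 and 8) happens in rEFIt's partitioning tool from the boot menu.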

Additionally, you will have to keep using rEFIt, as the boot device control panel item does not recognize the Linux partitions and so can't boot from them.

Now to find out whether that stupid resistor is still needed to make the new mini boot headless.

All-time favourite tools – update

It has been more than four years since I’ve last talked about my all-time favourite tools. I guess it’s time for an update.

Surprisingly, I still stand behind the tools listed there: my love for Exim is unchanged (it has only grown lately – but that's for another post). PostgreSQL is cooler than ever and powers PopScan day in, day out without flaws.

Finally, I’m still using InnoSetup for my Windows Setup programs, though that has lost a bit of importance in my daily work as we’re shifting more and more to the web.

Still. There are two more tools I must add to the list:

  • jQuery is a JavaScript helper library that lets you interact with the DOM of any web page while hiding away browser incompatibilities. There are a couple of libraries out there that do the same thing, but only jQuery is such a pleasure to work with: it works flawlessly, provides one of the most beautiful APIs I've ever seen in any library, and there are tons and tons of self-contained plug-ins out there that help you do whatever you could want to do on a web page.
    jQuery is an integral part of making web applications equivalent to their desktop counterparts in matters of user interface fluidity and interactivity.
    All while having such a nice API that I'm actually looking forward to doing the UI work – as opposed to the earlier days, which can most accurately be described as "UI sucks".
  • git is my version control system of choice. There are many of them out there and I've tried the majority of them for one thing or another. But only git combines backwards compatibility with what I've used before and what's still in use by my coworkers (SVN) with the ability to beautify commits, work on feature branches, execute very quickly and share patches very easily (see the sketch after this list).
    No single day passes without me using git and running into a situation where I’m reminded of the incredible beauty that is git.
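
To give a hedged idea of what that SVN backwards compatibility looks like in practice – the repository URL is made up and this is just the basic round trip, not my exact setup:

 % git svn clone https://svn.example.com/repo/trunk project   # one-time import of the SVN history
 % cd project
 % git checkout -b feature   # a local feature branch the SVN server never sees
 # ...hack, commit, clean up the commits with git rebase -i...
 % git svn rebase            # fetch and replay the latest SVN revisions
 % git svn dcommit           # send the polished commits back to SVN, one revision each

The SVN side only ever sees a tidy, linear series of commits, while locally I get cheap branches and history editing.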

In four years, I've not come across another tool I've used as consistently and with as much joy as git and jQuery, so those two have certainly earned their spot in my heart.

Google Apps: Mail Routing

Just today, while beginning the evaluation of a Google Apps For Your Domain Premium account, I noticed something that may be obvious to all of you Google Apps users out there, but certainly isn't documented well enough for you to notice it before you sign up:

Google Apps Premium has kick-ass mail routing functionality.

Not only can you configure Gmail to accept mail only from a defined upstream server, which allows you to keep the MX pointed at some already existing server where you can, for example, do alias resolution. No. You can also tell Gmail to send outgoing mail via an external relay.

This is ever so helpful, as it allows you to keep all the control you need over incoming email – for example if you have email-triggered applications running, or if you use email aliases (basically forwarders where xxx@domain.com is forwarded to yyy@other-domain.com), which Google Apps does not support.

Because you can keep your old MX, your existing applications keep working and your aliases continue to resolve.
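
As a sketch of the inbound leg – assuming, purely for illustration, that the existing MX happens to run Postfix, which is my assumption and not anything Google requires – alias resolution stays where it always was and whatever is left over gets handed on to Google (hooked in via virtual_alias_maps and transport_maps in main.cf):

 # /etc/postfix/virtual -- the aliases Google Apps can't do
 xxx@domain.com    yyy@other-domain.com

 # /etc/postfix/transport -- everything else for the domain goes on to Google Apps
 domain.com        smtp:[ASPMX.L.GOOGLE.COM]

Gmail's inbound gateway setting then only needs to trust that one upstream server.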

Being able to send all outgoing mail via your own relay, in turn, lets you get away without updating SPF records or forcing customers to change filters they may have set up for you.
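
In other words, if your domain already publishes an SPF record along these lines (the relay's IP is a placeholder), it stays accurate, because the mail still leaves through that very relay:

 domain.com.    IN TXT    "v=spf1 mx ip4:198.51.100.25 ~all"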

This feature alone can decide between a go and a no-go when evaluating Google Apps, and I cannot understand why they don't emphasize it way more than they currently do.

My new friend: git rebase -i

Last summer, I was into making git commits look nice with the intent of pushing a really nice and consistent set of patches to the remote repository.

The idea is that a clean remote history is a convenience for my fellow developers and for myself. A clean history means very well-defined patches – should a merge of a branch be necessary in the future. It also means much easier hunting for regressions and generally more fun doing some archeology in the code.

My last post was about using git add -i to refine the commits going into the repository. But what if you screw up the commit anyways? What if you forget to add a new file and notice it only some commits later?

This is where git rebase -i comes into play as this allows you to reorder your local commits and to selectively squash multiple commits into one.

Let’s see how we would add a forgotten file to a commit a couple of commits ago.

  1. You add the forgotten file and commit it (the complete command sequence is sketched right after this list). The commit message doesn't really matter here.
  2. You use git log or gitk to find the id of the commit you want to amend the new file to. Let's say it's 6bd80e12707c9b51c5f552cdba042b7d78ea2824.
  3. Pick the first few characters (or the whole ID), append ^ to address its parent and pass that to git rebase -i.
 % git rebase -i 6bd80e12^
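
Concretely, the whole sequence up to this point might look like this (the file name is just a placeholder):

 % git add path/to/the-forgotten-file   # whatever was missing from 6bd80e1
 % git commit -m "forgotten file"       # this becomes 5d2f4ed in the listing below
 % git log --pretty=oneline             # find the id of the commit to fix (6bd80e1...)
 % git rebase -i 6bd80e12^              # rebase onto its parent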

git will now open your favorite editor, displaying the list of commits since the revision you passed – in this case everything from 6bd80e1 onwards. It could look like this:

pick 6bd80e1 some commit message. This is where I have forgotten the file
pick 4c1d210 one more commit message
pick 5d2f4ed this is my forgotten file

# Rebase fc9a0c6..5d2f4ed onto fc9a0c6
#
# Commands:
#  p, pick = use commit
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit
#
# If you remove a line here THAT COMMIT WILL BE LOST.
# However, if you remove everything, the rebase will be aborted.
#

The comment in the file says it all – just reorder the lines (or however many there are in your case) and change the pick in front of the moved commit to squash, so the first three look like this:

pick 6bd80e1 some commit message. This is where I have forgotten the file
squash 5d2f4ed this is my forgotten file
pick 4c1d210 one more commit message

Save the file. Git will now do some magic and open the text editor again where you can amend the commit message for the commit you squashed your file into. If it’s really just a forgotten file, you’ll probably keep the message the same.

One word of caution though: Do not do this on branches you have already pushed to a remote machine or otherwise shared with somebody else. git gets badly confused if it has to pull altered history.

Isn't it nice that after months you still find new awesomeness in your tool of choice?

I guess I’ll have to update my all-time favorite tools list. It’s from 2004, so it’s probably ripe for that update.

Git rules.

The consumer loses once more

DRM strikes again: apparently, the PC version of Gears of War has stopped working, and this time the culprit seems to be an expired certificate.

Even though I do not play Gears of War, I take issue with this because of a multitude of problems:

First, it's another case where DRM does nothing to stop piracy but punishes the honest user for buying the original – no doubt the cracked versions of the game will continue to work just fine, since the certificate check has been stripped out.

Second, using any form of DRM with any type of media is incredibly shortsighted if it requires any external support to work correctly. Be it a central authorization server, be it a correct clock – you name it. Sooner or later you won’t sell any more of your media and thus you will shut your DRM servers down, screwing the most loyal of your customers.

This is especially apparent in the games market. Like no other market, it has a really vivid and ever-growing community of retro gamers. Like no other type of media, games seem to make people want to go back to them and play them again – even after ever so many years.

Older games are speedrun, discussed and even utterly destroyed. Even if the number of players declines over the years, it will never reach zero.

Now imagine DRM in all those old games once the DRM server is turned off or a certificate expires: no more speedruns, no more discussion forums, nothing. The games are devalued, and you as a game producer shut out your most loyal customers (those who keep playing your game after many years).

And my last issue is with this Gears of War case in particular: a time-limited certificate does not make any sense here. It's identity that must be checked. Let's say the AES key used to encrypt the game was encrypted with the private key of the publisher (so the public key is needed to decrypt it) and that public key is signed by the publisher's CA. Then, while you do check the identity of the publisher's certificate, checking its validity period is simply not needed: if it was valid once, it's probably valid in the future as well.

Or, put differently: a cracker with the ability to create certificates that look like they were signed by the publisher will most likely also be able to give them whatever validity period they like.

The issue here is that Gears of War probably uses some library function to check the certificate, and this library function also checks the timestamps on the certificate. The person who issued the certificate either thought that "two years is well enough" or just used the default value in their software.

The person using the library function just uses that, not thinking about the timestamp at all.

Maybe the game just calls some third-party DRM library, which in turn calls the X.509 certificate validation routines, and due to "security by obscurity" nobody documents how the DRM works – thus not even giving the developer (or the certificate issuer) a chance to see that the game will stop working once the certificate expires.

This is laziness.

So it's not just monetary issues that make DRMed stuff stop working. It's also laziness and a false sense of security.

DRM is doomed to fail and the industry finally needs to see that.

Managed switch

Yesterday I talked about configuring a VLAN in my home network.

VLAN is a technology that uses a tag in the Ethernet frame header to create virtual network segments on the same physical network – but just go ahead and read the linked Wikipedia article, as it's more detailed than what I would want to go into here.
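
Just to make that tag a little more concrete: this is not what the switch configuration looks like, it's simply how a tagged VLAN would show up on a Linux box plugged into a tagged switch port (interface name, VLAN ID and address are examples):

 % sudo modprobe 8021q        # 802.1Q tagging support
 % sudo vconfig add eth0 2    # creates eth0.2, which carries only VLAN 2 frames
 % sudo ifconfig eth0.2 192.0.2.10 netmask 255.255.255.0 up

Everything sent through eth0.2 is tagged as VLAN 2 on the wire, and the switch keeps that traffic separate from the untagged internal network.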

To really make use of VLANs, you are going to need at least one managed switch (two in my case). I knew this and I was looking around for something useful.

In the end, I went with two HP ProCurve 1800-8Gs: I wanted something with at least 8 ports that was Gigabit-capable, as I was feeling the bandwidth cap of the previous 100Mbit connection between shion and my media center when streaming 1080p content.

That's something I hope to solve with the Gigabit connection. The Drobo, still hanging off USB2, may remain the limiting factor here, but a theoretical 480Mbit (where are the MacMinis with the Firewire 800 interface?) is still a lot better than the 100Mbit I was constrained to with the old setup.

The ProCurves are fanless, provide 8 ports and have a really nice web interface which is very easy to use and works in all browsers (as opposed to some Linksys devices which only work with IE6 – not even IE7 does the trick). The interface is also very responsive and even comes with excellent online help.

With only 10 minutes of thought going into the setup and another 5 minutes to configure the two switches, I was ready to hook them up and got instant satisfaction: in my server room I plugged a test machine into any of ports 2-7 and got onto VLAN 1 (the internal network). Then I plugged it into port 8 and promptly was on VLAN 2 (as evidenced by the public IP I got).

I have only three minor issues with the configuration of the two switches so far:

  1. They come with an empty administration password by default and don't force you to change it. Granted, on a switch you cannot do as much mischief as on a router or, worse, a NAS or access point, but it's still not a good thing.
  2. They come preconfigured with the address 192.168.2.10 and DHCP disabled, practically forcing you to configure them locally before plugging them in. I would have hoped for either DHCP enabled or, even better, the possibility of configuring them via RARP. Or they could have provided a serial interface, which they don't.
  3. To reset them, you have to unplug them, connect port 1 to port 2 and power them up again. While this prevents you from accidentally resetting them, the procedure is a pain to carry out, and by the time I actually have to do it, I'll probably have forgotten how it goes.

But these are minor issues. The quick web interface, the excellent online help and the small fanless design make this the optimal switch if you have advanced requirements to fulfill but don't need more than 8 ports.

There's a larger 24-port cousin of the 1800-8G, but that one has a fan, so it was no option in my case – especially not in the sideboard, where I'm now at the end of the 8-port capacity.

Life is good

Remember last week when I was ranting about nothing working as it should?

Well – this week feels a lot more successful than the last one. It may very well be one of the nicest weeks I've had in IT so far.

  • The plugin system I've written for our PopScan Windows client doesn't just work, it's also some of the shiniest code I've written in my life. Everything is completely transparent and thus easy to debug and extend. Once more, simplicity led to consistency, and consistency is what I'm striving for.
  • Yesterday, we finally managed to kill a long-standing bug in a certain PopScan installation which seemed to manifest itself as intermittently failing synchronization, but turned out to be synchronization that wasn't working at all. Now it works consistently.
  • Over the weekend, I finally got off my ass and used some knowledge of physics and a water level to re-balance my projector on its ceiling mount, making the picture fit the screen perfectly.
  • Just now, I've configured two managed switches at home to carry the cable modem traffic over a separate VLAN, allowing me to abandon my previous wacky setup, which wasted a lot of cable and looked really bad. I was forced to do that because a TV connector I had mounted stopped working consistently (here's the word again).

    The configuration I had thought out worked instantly, and internet downtime at home (as if anybody were counting) was about 20 seconds – the TCP connections even all stayed up.

  • I finally got mt-daapd to work consistently with all the umlauts in the file names of my iTunes collection.

If this week is an indication of how the rest of the year will be, then I’m really looking forward to this.

As the title says: Life is good.

pointers, sizes

Just a small reminder for myself:

If

TMyRecord = record
  pointer1: pointer;
  pointer2: pointer;
  pointer3: pointer;
  pointer4: pointer
end;
PMyRecord = ^TMyRecord;

then

  sizeof(TMyRecord) <> sizeof(PMyRecord)

So

  var rec: PMyRecord;

  rec := AllocMem(sizeof(rec)); { sizeof(rec) = sizeof(pointer), i.e. 4 bytes on 32 bit }

is probably not a sensible thing to do (at least not if you intend to actually put something into the space the pointer points to). What I actually wanted, of course, was AllocMem(sizeof(TMyRecord)).

At least it started breaking very quickly and very consistently once TMyRecord had enough members – too bad, though, that I first looked in completely the wrong place.

Nothing beats the joy of seeing a very non-localized access violation go away after two hours of debugging though.

Tunnel munin nodes over HTTP

Last time, I talked about Munin, the one system monitoring tool that I feel works well enough for me to actually bother with. Harsh words, I know, but the key to every solution is simplicity. And simple Munin is. Simple, but still powerful enough to do everything I would want it to do.

The one problem I had with it is that querying remote nodes happens over a custom TCP port (4949), which doesn't get through most firewalls.
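
To see what actually travels over that port (the host name here is made up), you can talk to a node by hand – it speaks a trivial line-based text protocol, which is exactly what makes it feasible to shovel it through something else:

 % printf 'list\nfetch load\nquit\n' | nc munin-node.example.com 4949
 # prints the node's banner, the list of available plugins and then
 # the values of the "load" plugin, something like "load.value 0.42"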

There are some SSH tunneling solutions around, but what do you do if even SSH is not an option, because the remote access method provided to you relies on some kind of VPN technology or access token?

Even if you could keep a long-running VPN connection open, it's quite a resource-intensive solution, as it ties up resources on the VPN gateway. But the point is moot anyways, because nearly all VPNs terminate long-running connections. And if re-establishing the connection requires physical interaction, you are basically done here.

This is why I have created a neat little solution which tunnels the Munin traffic over HTTP: a local proxy server for your Munin monitoring process to connect to, and a little CGI script on the remote end.
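
On the monitoring side, the master then simply treats the local proxy as if it were the remote node itself. Host name and proxy port here are made-up examples, not something the actual code mandates:

 # /etc/munin/munin.conf on the monitoring host
 [behind-the-firewall.example.com]
     address 127.0.0.1    # the local HTTP-tunnelling proxy
     port 4950            # whatever port you let the proxy listen on
     use_node_name yes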

This setup causes multiple HTTP requests per query interval (the proxy uses Keep-Alive, though, so it's not new TCP connections we are talking about – just hits in the access.log you'll have to filter out somehow), because it's impossible for a CGI script to keep the connection open and send data both ways – at least not if the server side is running plain PHP, which is the case in the setup I was designing this for.

Anyways – the solution works flawlessly and helps me monitor a server behind one hell of a firewall and behind a reverse proxy.

You’ll find the code here (on GitHub as usual) and some explanation on how to use it is here.

Licensed under the MIT license as usual.