Announcing tempalias.com

Have you ever been in the situation where you had to provide a web service with an email address to get that confirmation email, knowing full well that you will not only get that, but also “important announcements” and “even more important product information”?

Wouldn’t it be nice if they could just send you the confirmation link but nothing more?

That’s possible now!

Head over to

tempalias.com

type the email address that should receive the confirmation mail, specify how many mails you want to receive and for how many days. Then hit the button and – boom – there’s your unique email address that you can provide to the service. Once the usage or time limit has been met, no more mail to that alias will be accepted.
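If you'd rather skip the form, the same thing can be done against the JSON web service the site itself talks to (the development diary below describes the POST handler at /aliases). Here's a rough sketch of what such a call could look like – the field names are assumptions for illustration, not the documented API:

    // Hypothetical call against the POST /aliases endpoint mentioned in the
    // development diary. Field names (target, days, max_usage) are made up.
    const payload = { target: 'pilif@gnegg.ch', days: 1, max_usage: 1 };

    fetch('http://tempalias.com/aliases', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' }, // the handler checks this
      body: JSON.stringify(payload),
    })
      .then((res) => res.json())
      .then((alias) => console.log('your temporary address:', alias));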

tempalias.com is a fun project of mine and also a learning experience. tempalias is written in node.js, a framework I had no prior experience with (but a whole lot of curiosity for). This is why I not only created the site, but also documented my steps along the way. Here are the links to the various postings in chronological order (oldest first – I bolded the ones that contain useful information beyond just reporting on progress or bugs):

  1. development diary
  2. another day
  3. persistence
  4. SMTP and design
  5. config file, SMTP cleanup, beginnings of a server
  6. the cake is a lie
  7. rewrites
  8. sysadmin work
  9. learning CSS
  10. debriefing

If you want to get in touch with me to report bugs, ask questions, rant or maybe send a patch, please send an email to y8b3@tempalias.com – erm… no. Just kidding (you can try sending an email there – it's good for one day and one email – so good luck). Send it to pilif@gnegg.ch or contact me on Twitter: @pilif.

tempalias.com – debriefing

This is the last part of the development diary I was keeping about the creation of a new web service in node.js. You can read the previous installment here.

It’s done.

The layout is finished and the last edges too rough for pushing the thing live are smoothed out. tempalias.com is live. After coming really close to finishing the thing last night (hence the lack of a posting here – I was too tired when I had to quit at 2:30am), today I could complete the results page and add the needed finishing touches (like a really cool way of catching Enter to proceed from the first to the last form field – my favorite hidden feature).

I guess it’s time for a little debriefing:

All in all, the project took a time span of 17 days to implement from start to finish. I did this after work, mostly during weekdays and on Sundays, so it's actually 11 days in which work was going on (I was also sick for two days). Each day I worked around 4 hours, so all in all, this took around 44 hours to implement.

A significant part of this time went into modifications of third-party libraries, while I tried to contact the original authors to get my changes merged upstream:

  • The author of node-smtp isn't interested in the SMTP daemon functionality (which wasn't there when I started and is now complete)
  • The author of redis-node-client didn't like my patch, but we had a really fruitful discussion and redis-node-client got a lot better at handling dropped connections in the process.
  • The author of node-paperboy has merged my patch for a nasty issue and even tweeted about it (THANKS!)

Before I continue, I want to say a huge thanks to fictorial on GitHub for the awesome discussion I was allowed to have with him about redis-node-client's handling of dropped connections. I've enjoyed every word I was typing and reading.

But back to the project.

Non-third-party code consists of just 1624 lines of code (measured using wc -l, so not an accurate measurement). This doesn't factor in the huge amount of changes I made to my fork of node-smtp, whose daemon part was basically non-existent.

Overall, here's what I learned:

  • git and github are awesome. I knew that beforehand, but this just cemented my opinion
  • node.js and friends are still in their infancy. While node removes previously published APIs on a nearly daily basis (it's mostly bug-free though), none of the third-party libraries I am using were sufficiently bug-free to use without changes.
  • Asynchronous programming can be fun if you have closures at your disposal
  • Asynchronous programming can be difficult once the nesting gets deep enough
  • Making any variable not declared with var global is the worst design decision I have ever seen in my life (especially in node, where we are adding concurrency to the mix) – a quick illustration follows this list
  • While it's possible (and IMHO preferable) to have a website built from just RESTful web services and a static JavaScript frontend, sometimes just a tiny little bit of HTML generation could be useful. Still: everything works without emitting even a single line of dynamically generated HTML code.
  • Node is crazy fast.
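To illustrate the implicit-globals point with a minimal, made-up example: forget a single var and two seemingly independent closures end up sharing one global.

    // Minimal illustration of the implicit-global pitfall (not tempalias code).
    function makeCounter() {
      count = 0; // oops: no `var`, so `count` silently becomes a global
      return function () {
        return ++count;
      };
    }

    var a = makeCounter();
    console.log(a()); // 1
    console.log(a()); // 2

    var b = makeCounter(); // silently resets the shared global back to 0
    console.log(b());      // 1
    console.log(a());      // 2 again, not the 3 you'd expect: both share `count`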

Also, I want to take the opportunity and say huge thanks to:

  • the guys behind node.js. I would have had to do this in PHP or even Rails (which is an even worse fit than PHP, as it provides so much functionality around generating dynamic HTML and so little around pure JSON-based web services) without you guys!
  • Richard for his awesome layout
  • fictorial for redis-node-client and for the awesome discussion I was having with him.
  • kennethkalmer for his work on node-smtp, even though it was still incomplete – you led me down the right track on how to write an SMTP daemon. Thank you!
  • @felixge for node-paperboy – static file serving done right
  • The guys behind sammy – writing fully JS-based AJAX apps has never been easier or more fun.

Thank you all!

The next step will be marketing: Seeing that this is built on node.js and is an actually usable project – way beyond the usual little experiments – I hope to gather some interest in the hacker community. Seeing that it also provides a real-world use, I'll even go and try to submit news about the project to more general outlets. And of course to the Security Now! feedback page, as this is inspired by their episode 242.

tempalias.com – development diary

Listening to this week's Security Now! podcast, where they were discussing disposeamail.com, reminded me of a little idea I had back in 2002: self-destructing email addresses.

Instead of providing a web interface for a catchall alias, my solution was based around the idea of providing a way to encode time-based validity information and even a usage counter into an email address, and then to check that information on reception of the email to decide whether to alias the source address to a target address or to decline delivery with a “User unknown” error.

This would allow you to create temporary email aliases which redirect to your real inbox for a limited amount of time or number of emails, but instead of forcing you to visit some third-party web interface, you would get the email right where all your other messages end up: in your personal inbox.
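To make the idea a bit more concrete, here is a minimal sketch of what such an encoding could look like (everything here – the format, the shared secret, the HMAC – is made up for illustration and written against today's node API; the usage count itself of course still needs a tiny bit of state on the receiving server):

    // Illustration only: pack the target address, an expiry timestamp and a
    // usage limit into the local part of an alias, plus an HMAC so the alias
    // can't be forged. This is not how tempalias.com actually works.
    const crypto = require('crypto');

    const SECRET = 'change-me'; // shared secret of the receiving mail server

    function makeAlias(target, expiresAt, maxUses) {
      const data = Buffer.from([target, expiresAt.getTime(), maxUses].join('|'))
        .toString('base64url');
      const sig = crypto.createHmac('sha256', SECRET)
        .update(data).digest('base64url').slice(0, 10);
      return data + '.' + sig + '@example.com';
    }

    // Called by the mail server on reception: returns the real target address,
    // or null, in which case delivery is declined with "User unknown".
    function resolveAlias(address, timesUsedSoFar) {
      const [data, sig] = address.split('@')[0].split('.');
      const expected = crypto.createHmac('sha256', SECRET)
        .update(data).digest('base64url').slice(0, 10);
      if (sig !== expected) return null;
      const [target, expiresAt, maxUses] =
        Buffer.from(data, 'base64url').toString().split('|');
      if (Date.now() > Number(expiresAt)) return null;
      if (timesUsedSoFar >= Number(maxUses)) return null;
      return target;
    }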

Of course, this old solution had one big problem: it required a mail server on the receiving end, and it required you as a potential user to hook the script into that mail server (also, I never managed to do just that with exim before losing interest, but by now I would probably know how to do it).

Now. Here comes the web 2.0 variant of the same thing.

tempalias.com (yeah, it was still available – so was .net) will provide you with a web service that allows you to create a temporary mail address that redirects to your real address. This temporary alias will be valid only for a certain date range and/or a certain number of emails sent to it. You will be able to freely choose the date range and/or invocation count.

In contrast to the other services out there, the alias will deliver to your standard inbox. No ad-filled web interface. No security problems caused by typos, and no account registration.

Also, the service will be completely open source, so you will be able to run your own.

My motivation is to learn something new, which is why I am

  • writing this thing in node.js (also because a simple REST-based webapp and a simple SMTP proxy are just what node.js was invented for)
  • documenting my progress of implementation here (which also hopefully keeps me motivated).

My progress in implementing the service will always be visible to the public on the project's GitHub page:

http://github.com/pilif/tempalias

As you can see, there’s already stuff there. Here’s what I’ve learned about today and what I’ve done today:

  • I learned how to use git submodules
  • I learned a bunch about node.js – how to install it, how it works, how module separation works and how to export stuff from modules.
  • I learned about the Express micro framework (which does exactly what I need here)
    • I learned how request routing works
    • I learned how to configure the framework for my needs (and how that’s done internally)
    • I learned how to play with HTTP status codes and how to access information about the request

Considering the huge amount of stuff I plainly had no clue about, what I've accomplished code-wise is quite little:

  • I added the web server code that will run the webapp
  • I created a handler that handles POST requests to /aliases (a rough sketch of what such a handler could look like follows this list)
  • Said handler checks the content type of the request
  • I added a very rudimentary model class for the aliases (and learned how to include and use that)
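Just to give an idea of the shape this takes, here is a rough sketch of such a handler – written against today's Express API rather than the version available in 2010, and with request field names that are pure assumptions:

    // Hypothetical sketch of the POST /aliases handler described above.
    const express = require('express');
    const app = express();

    app.use(express.json()); // parse JSON request bodies

    app.post('/aliases', (req, res) => {
      // reject anything that doesn't claim to be JSON
      if (!req.is('application/json')) {
        return res.status(415).send('expected application/json');
      }
      const { target, days, max_usage } = req.body; // field names are made up
      if (!target || (!days && !max_usage)) {
        return res.status(400).send('missing required fields');
      }
      // persistence is still undecided at this point in the diary,
      // so just echo a made-up alias record back
      res.status(201).json({ id: 'y8b3', target: target, days: days, max_usage: max_usage });
    });

    app.listen(3000);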

I still don't know how I will store the alias information. In a sense, it's a really simple data model mapping an alias ID to its information, so it's predestined for the cool key/value stores out there. On the other hand, I want the application to stay simple, and I don't feel like adding a key/value store as a huge dependency just for keeping track of three values per alias.
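For illustration, here's roughly what one such record could look like in a key/value store (the debriefing above shows that redis-node-client is what ended up being used; the field names and the use of today's node-redis API are assumptions):

    // Hypothetical alias record in redis: one key per alias, expiring together
    // with the alias itself. Field names are made up for illustration.
    const redis = require('redis');

    async function saveAlias(id, alias) {
      const client = redis.createClient();
      await client.connect();
      await client.set('alias:' + id, JSON.stringify(alias), {
        EX: alias.days * 24 * 60 * 60, // let redis expire the key for us
      });
      await client.quit();
    }

    saveAlias('y8b3', { target: 'pilif@gnegg.ch', days: 1, usagesLeft: 1 })
      .catch(console.error);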

Before writing more code, I’ll have to find out how to proceed.

So the next update will probably be about that decision.

Google Buzz, Android and Google Apps Accounts

I was looking at the Google Maps application for Android, which now provides integrated Google Buzz support, showing buzzes directly on the map and allowing you to buzz (around where I live and work, there has been a tremendous uptake of Google Buzz, which makes this really compelling).

However, there's a little peculiarity about the Android Maps application: if the main Google account you configured on the phone (that's the first one you configure) is a Google Apps account, Maps will use that one for Buzz support (apparently, there's already some kind of infrastructure for inter-company buzzing in place). This means that you would only see buzzes from other people in your domain and, because there's no official support for this out there, only if they are also using an Android phone.

“Mittelpraktisch” (roughly “moderately practical”), as I would say in German.

The obvious workaround is to configure your private Gmail account as your primary account (this is only possible by factory-resetting your device, by the way), but this has some disadvantages, mainly the fact that the calendar on Android phones only supports syncing with the primary account and, as it happens, it's usually the work calendar (the Apps one) you want synchronized, not the private one (which lingers unused in my case).

To work around this issue, share your work calendar with your private Google account.

Unfortunately, I couldn't do that as I'm posting this, because the default in the domain configuration is to not allow it. Thankfully, I'm that domain's administrator, so I could change it (small company, remember), but it seems to take a while to propagate into the calendar account.

I’ll post more as my investigation turns out more, though it is my gut feeling that this mess will solve itself as Google fixes their Maps application to not use that phantom corporate buzz account.

Introducing sacy, the Smarty Asset Compiler

We all know how beneficial it can be to the performance of a web application to serve assets like CSS and JavaScript files in larger chunks as opposed to smaller ones.

The main reason behind this is the latency incurred by requesting a resource from the server, plus the additional bandwidth of the request metadata, which can grow quite large once you take cookies into account.

But knowing this, we also want to keep files separate during development to help us with the debugging and development process. We also want the deployment to not increase too much in difficulty, so we naturally dislike solutions that require additional scripts to run at deployment time.

And we certainly don’t want to mess with the client-side caching that HTTP provides.

And maybe we’re using Smarty and PHP.

So this is where sacy, the Smarty Asset Compiler plugin comes in.

The only thing you have to do during development (besides a one-time configuration of the plugin) is to wrap all your <link> tags in {asset_compile}…{/asset_compile}, and the plugin will do everything else for you, where everything includes:

  • automatic detection of actually linked files
  • automatic detection of changed files
  • automatic minification of linked files
  • compilation of all linked files into one big file
  • linking that big file for your clients to consume. Because the file is still served by your webserver, there’s no need for complicated handling of client-side caching methods (ETag, If-Modified-Since and friends): Your webserver does all that for you.
  • Because the cached file gets a new URL every time any of the corresponding source files change, you can be sure that requesting clients will retrieve the correct, up-to-date version of your assets.
  • sacy handles concurrency without blocking while one process is writing the compiled file (and of course without corrupting said file).

sacy is released under the MIT license and ready to be used (though it currently only handles CSS files and ignores the media-attribute – stuff I’m going to change over the next few days).

Interested? Visit the project's page on GitHub or, even better, fork it and help improve it!

Snow Leopard and PHP

Earlier versions of Mac OS X always shipped with pretty outdated versions of PHP in their default installation, so what you usually did was go to entropy.ch and fetch the packages provided there.

Now, after updating to Snow Leopard, you'll notice that the entropy configuration has been removed, and once you add it back in, you'll see Apache segfaulting and some missing-symbol errors.

Entropy has not updated the packages for Snow Leopard yet, so you could have a look at the PHP that comes with stock Snow Leopard – this time it's even bleeding edge: Snow Leopard ships with PHP 5.3.0.

Unfortunately though, some vital extensions are missing – most notably for me, the PostgreSQL extension.

This time around though, Snow Leopard comes with a functioning PHP development toolset, so there's nothing stopping you from building it yourself. Here's how to get the official PostgreSQL extension working on Snow Leopard's stock PHP:

  1. Make sure that you have installed the current Xcode Tools. You’ll need a working compiler for this.
  2. Make sure that you have installed PostgreSQL and know where it is on your machine. In my case, I've used the one-click installer from EnterpriseDB (which survived the update to 10.6).
  3. Now that Snow Leopard uses a fully 64-bit userspace, we'll have to make sure that the PostgreSQL client library is available as a 64-bit binary – or even better, as a universal binary. Unfortunately, that's not the case with the one-click installer, so we'll have to fix that first:
    1. Download the sources of the PostgreSQL version you have installed from postgresql.org
    2. Open a terminal and use the following commands:
      % tar xjf postgresql-[version].tar.bz2
      % cd postgresql-[version]
      % CFLAGS="-arch i386 -arch x86_64" ./configure --prefix=/usr/local/mypostgres
      % make

      make will fail sooner or later because the Postgres build scripts can't handle building a universal binary server, but the compile will progress far enough for us to then build libpq. Let's do this:

      % make -C src/interfaces
      % sudo make -C src/interfaces install
      % make -C src/include
      % sudo make -C src/include install
      % make -C src/bin
      % sudo make -C src/bin install
  4. Download the php 5.3.0 source code from their website. I used the bzipped version.
  5. Open your Terminal and cd to the location of the download. Then use the following commands:
    % tar -xjf php-5.3.0.tar.bz2
    % cd php-5.3.0/ext/pgsql
    % phpize
    % ./configure --with-pgsql=/usr/local/mypostgres
    % make -j8 # in case of one of these nice 8 core macs :p
    % sudo make install
    % cd /etc
    % cp php.ini-default php.ini
  6. Now edit your new php.ini and add the line extension=pgsql.so

And that’s it. Restart Apache (using apachectl or the System Preferences) and you’ll have PostgreSQL support.

All in all, this is a tedious process, and it's the price we early adopters constantly have to pay.

If you want an honest recommendation on how to run PHP with PostgreSQL support on Snow Leopard, I’d say: Don’t. Wait for the various 3rd party packages to get updated.

802.11n, Powerline and Sonos

I decided to have a look into the networking setup for my bedroom as lately, I was getting really bad bandwidth.

Earlier, while unable to stream 1080p into my bedroom, I was at least able to watch 720p, but lately even that has become choppy at best.

In my bedroom, I was using a Sonos ZonePlayer 100 connected via Ethernet to a Devolo A/V 200 MBit powerline adapter.

I have been using the switch integrated into the ZonePlayer to connect the bedroom MacMini media center and the PS3 to the network. The idea was that powerline would provide better bandwidth than WiFi, which it initially seemed to do, but as I said, lately this setup became really painful to use.

Naturally I had enough and wanted to look into other options.

Here’s a quick list of my findings:

  • The Sonos ZonePlayer actually acts as a bridge. If one player is connected via Ethernet, it’ll use its mesh network to wirelessly bridge that Ethernet connection to the switch inside the Sonos. I’m actually deeply astonished that I even got working networking with my configuration.
  • Either my Devolo adapter is defective or something strange is going on in my powerline network – a test using FTP never yielded more than 1 MB/s of throughput, which explains why 720p didn't work.
  • While still not a ratified standard, 802.11n, at least as implemented by Apple, works really well and delivers a constant 4 MB/s of throughput in my configuration.
  • Not wanting to risk cross-vendor incompatibilities (802.11n is not ratified after all), I went the Apple Airport route, even though there probably would have been cheaper solutions.
  • Knowing that bandwidth rapidly decreases with range, I bought one AirPort Extreme Base Station and three AirPort Expresses which I’m using to do nothing but extend the 5Ghz n network.
  • All the AirPort products have a nasty, constantly lit LED which I had to cover up – this is my bedroom, after all, but I still wanted line of sight to optimize bandwidth. There is a configuration option for the LED, but it only provides two settings: constantly on (annoying) and blinking on traffic (very annoying).
  • While the large AirPort Extreme can create both a 2.4 GHz and a 5 GHz network, the Express ones can only extend either one of them!

This involved a lot of trying out, changing around configurations and a bit of research, but going from 0.7 MB/s to 4 MB/s in throughput certainly was worth the time spent.

Also, yes, these numbers are in megabytes unless I write MBit, in which case it's megabits.

Playing Worms Armageddon on a Mac

Last weekend, I had a real blast with the Xbox 360 Arcade version of Worms. Even after so many years, this game still rules them all, especially (if not only) in multiplayer mode.

The only drawback of the 360 version is the lack of weapons.

While the provided set is all well and good, the game is just not the same without the Super Banana Bomb or the Super Sheep.

Worms Screenshot

So this is why I dug out my old Worms Armageddon CD and tried to get it to work on today's hardware.

Making it work under plain Vista was easy enough (get the latest beta patch for Armageddon, by the way):

Right-click the icon, select the Compatibility tab, choose Windows XP, disable themes and desktop composition, and run the game with administrative privileges.

You may get away with not using one option or another, but this combination worked consistently.

To be really useful though, I wanted to make the game run under OS X, as this is my main environment and I really dislike going through the lengthy boot process that is Boot Camp.

I tried the various virtualization solutions around – something that should work seeing that the game doesn’t really need much in terms of hardware support.

But unfortunately, this was way harder than anticipated:

  • The initial try was done using VMware Fusion, which looked very good at first but failed miserably later on: while I was able to launch (and actually use) the game's frontend, the actual game was a flickery mess with no known workaround.
  • Parallels failed by displaying a black menu. It was still clickable, but there was nothing on the screen but blackness and a white square border. Googling around a bit led to the idea of setting SlowFrontendWorkaround to 0 in the registry, which actually made the launcher work, but the game itself crashed consistently without an error message.

In the end, I achieved success using VirtualBox. The SlowFrontendWorkaround setting is still needed to make the launcher work, and the mouse integration of the VirtualBox guest tools needs to be disabled (it's in the Machine menu; the game still runs with it enabled, but you won't be able to actually control the mouse pointer consistently), but after that, the game runs flawlessly.

Flicker-free and with a decent frame rate. And with sound, of course.

To enable the workaround I talked about, use this .reg file.

Now the slaughter of worms can begin :-)

Double-blind cola test

The final analysis

Two of my coworkers decided today after lunch that it was time to solve the age-old question: Is it possible to actually tell different kinds of cola apart just by tasting them?

In the spirit of true science (and a hefty dose of Mythbusters), we decided to do this the right way and set up a double-blind test. The idea is that not only must the tester not know which sample is which, but neither must the person administering the test, to make sure the tester is not influenced in any way.

So here’s what we have done:

  1. We bought five different types of cola: a can of Coke Light, a can of standard Coke, a PET bottle of standard Coke, a can of Coke Zero and finally, a can of the new Red Bull Cola (at the risk of spoiling the outcome: eek).
  2. We marked five glasses with numbers from 1 to 5 at the bottom.
  3. We asked a coworker not taking part in the actual test to fill the glasses with the respective drink.
  4. We put the glasses on our table in random order and designated each glass's position with letters from A to E.
  5. One after another, we drank the samples and noted which glass (A–E) we thought contained which drink (1–5). So as not to influence one another during the test, the kitchen area was off-limits to everyone but the current test subject, and each person's results were kept secret until the end of the test.
  6. We compared notes.
  7. We checked the bottom of the glasses to see how we fared.

The results are interesting:

  • Of the four people taking part in the test, all but one guessed all types correctly. The one person who failed wasn't able to correctly distinguish between bottled and canned standard Coke.
  • Everyone instantly recognized the Red Bull Cola (no wonder there – it's much brighter than the other contenders and it smells like cough medicine)
  • Everyone got the Coke Light and the Coke Zero right.
  • Although the tester pool was way too small, it's interesting that 75% of the testers were able to discern the Coke in the bottle from the Coke in the can – I would not have guessed that. But then, there's only a 50% chance of getting this one wrong, so we may all just have been lucky – at least I was, to be honest.

Fun in the office doing pointless stuff after lunch, I guess.

New MacMini (early 09) and Linux

The new MacMinis that were announced this week come with a FireWire 800 port, which was reason enough for me to update shion yet again (keeping the host name, of course).

All the media she's serving to my various systems is stored on a second-generation Drobo, which is currently connected via USB 2 but has a so-far unused FW800 port.

Of course the upgrade to FW800 will not double the transfer rate to and from the Drobo, but it should increase it significantly, so I went ahead and got one of the new Minis.

As usual, I inserted the Ubuntu (Intrepid) CD, held C while turning the device on, and completed the installation.

This left the Mini in an unbootable state.

It seems that this newest generation of Mac hardware isn't capable of booting from an MBR-partitioned hard drive. Earlier Macs complained a bit if the hard drive wasn't correctly partitioned, but then went ahead and booted the other OS anyway.

Not so with the new boxes, it seems.

To finally achieve what I wanted, I had to go through the following complicated procedure:

  1. Install rEFIt (just download the package and install the .mpkg file)
  2. Use the Bootcamp assistant to repartition the drive.
  3. Reboot with the Ubuntu Desktop CD and run parted (the partitioning could probably also be accomplished with the console installer, but I didn't manage to do it correctly).
  4. Resize the FAT32-partition which was created by the Bootcamp partitioner to make room at the end for the swap partition.
  5. Create the swap partition.
  6. Format the FAT32-partition with something useful (ext3)
  7. Restart and enter the rEFIt partitioner tool (it’s in the boot menu)
  8. Allow it to resync the MBR
  9. Insert the Ubuntu Server CD, reboot holding the C key
  10. Install Ubuntu normally, but don’t change the partition layout – just use the existing partitions.
  11. Reboot and repeat steps 7 and 8
  12. Start Linux.

Additionally, you will have to keep using rEFIt, as the boot device control panel item does not recognize the Linux partitions any more and so can't boot from them.

Now to find out whether that stupid resistor is still needed to make the new Mini boot headless.