tempalias.com – development diary

Listening to this week’s Security Now! podcast, where they were discussing disposeamail.com, reminded me of a little idea I had back in 2002: self-destructing email addresses.

Instead of providing a web interface for a catch-all alias, my solution was based on the idea of encoding time-based validity information and even a usage counter into an email address, and then checking that information when an email arrives to decide whether to alias the source address to a target address or to decline delivery with a “User unknown” error.

This would allow you to create temporary email aliases which redirect to your real inbox for a limited time or a limited number of emails. But instead of forcing you to visit some third-party web interface, you would get the email right where all your other messages end up: in your personal inbox.

Of course, this old solution had one big problem: it required a mail server on the receiving end, and it required you, as a potential user, to hook the script into that mail server (also, I never managed to do just that with Exim before losing interest, though by now I would probably know how to do it).

Now. Here comes the web 2.0 variant of the same thing.

tempalias.com (yeah. it was still available. so was .net) will provide you with a web service that allows you to create a temporary mail address that redirects to your real address. This temporary alias will be valid only for a certain date range and/or a certain number of emails sent to it. You will be able to freely choose the date range and/or invocation count.

In contrast to the other services out there, the alias will redirect to your standard inbox. No ad-filled web interface. No security problems caused by typos. No account registration.

Also, the service will be completely open source, so you will be able to run your own.

My motivation is to learn something new, which is why I am

  • writing this thing in Node.js (also because a simple REST-based webapp plus a simple SMTP proxy is just what Node.js was invented for)
  • documenting my implementation progress here (which will also hopefully keep me motivated).

My progress in implementing the service will always be visible to the public on the project’s GitHub page:

http://github.com/pilif/tempalias

As you can see, there’s already stuff there. Here’s what I’ve learned about today and what I’ve done today:

  • I learned how to use git submodules
  • I learned a bunch about Node.js – how to install it, how it works, how module separation works and how to export stuff from modules.
  • I learned about the Express micro framework (which does exactly what I need here)
    • I learned how request routing works
    • I learned how to configure the framework for my needs (and how that’s done internally)
    • I learned how to play with HTTP status codes and how to access information about the request

What I’ve accomplished code-wise is, considering the huge amount of stuff I plainly had no clue about, quite little:

  • I added the web server code that will run the webapp
  • I created a handler for POST requests to /aliases
  • Said handler checks the content type of the request (a minimal sketch of this handler follows after the list)
  • I added a very rudimentary model class for the aliases (and learned how to include and use that)
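
To make that a bit more concrete, here is a minimal sketch of what such a handler could look like. It is written against today’s Express API rather than the version I’m actually using, and the request fields (target, days, max-usage) are placeholders of mine, not the project’s actual interface:

    var express = require('express');
    var app = express();

    app.use(express.json());   // parse JSON request bodies

    // POST /aliases - create a new temporary alias
    app.post('/aliases', function (req, res) {
        // reject anything that isn't JSON
        if (!req.is('application/json')) {
            return res.status(415).send('Expected application/json');
        }

        var alias = {
            target: req.body.target,            // the real address to redirect to
            days: req.body.days,                // validity period in days (optional)
            maxUsage: req.body['max-usage']     // maximum number of mails (optional)
        };

        // persistence is still an open question (see below), so just echo for now
        res.status(201).json(alias);
    });

    app.listen(3000);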

I still don’t know how I will store the alias information. In a sense, it’s a really simple data model mapping an alias ID to its information, so it’s predestined for the cool key/value stores out there. On the other hand, I want the application to be simple and I don’t feel like adding a key/value store as a huge dependency just for keeping track of 3 values per alias.
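
To illustrate just how little data is involved, here is a hypothetical sketch of a per-alias record; the field names are mine, not the project’s:

    // everything the service needs to know about one alias,
    // keyed by the alias ID (all names purely illustrative)
    var aliases = {
        "x7f3q": {
            target: "me@example.com",   // where mail gets redirected to
            expires: "2010-04-30",      // end of the validity period (optional)
            remaining: 5                // mails that may still be delivered (optional)
        }
    };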

Before writing more code, I’ll have to find out how to proceed.

So the next update will probably be about that decision.

PHP 5.3 and friends on Karmic

I have been patient. For months I hoped that Ubuntu would sooner or later get PHP 5.3, a release I’m very much looking forward to, mainly because the addition of anonymous functions spells the death of create_function() or even eval().
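
To illustrate what I mean, here is a contrived before/after sketch (the callback itself is obviously made up):

    <?php
    // PHP 5.2 and earlier: build the callback as a string at runtime
    $double = create_function('$x', 'return $x * 2;');

    // PHP 5.3: a real anonymous function - no string quoting, no hidden eval
    $double = function ($x) {
        return $x * 2;
    };

    print_r(array_map($double, array(1, 2, 3)));   // 2, 4, 6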

We didn’t get 5.3 for Karmic, and who knows about Lucid (it’s crazy that nearly one year after the release of 5.3, there is still debate about whether to include it in the next version of Ubuntu, which will be the current LTS release for the next four years; IMHO that’s quite a disservice to PHP 5.3 adoption).

Anyways: we are in the process of releasing a huge update to PopScan that is heavily focused on getting rid of cruft, increasing speed all over the place and improving overall code quality. Especially that last part could benefit from having 5.3, and seeing that PopScan already runs well on 5.3 at this point, I really wanted to upgrade.

In comes Al-Ubuntu-be, a coworker of mine, with his awesome Debian packaging skills: while there are already a few PPAs out there that contain a 5.3 package, Albe went the extra step and packaged not only PHP 5.3 but also quite a few other packages we depend on that might be useful to my readers as well: APC, memcache, imagick and xdebug for development.

While we can make no guarantees that these packages will be heavily maintained, they will get some security-update treatment (though most likely by version bumping as opposed to backporting).

So. If you are on Karmic (and later Lucid, should it not get 5.3) and want to run PHP 5.3 with APC and memcache, head over to Albe’s PPA.

Also, I’d like to take the opportunity to thank Albe for his efforts: having a PPA with real .deb packages, as opposed to the self-compiled mess I would have produced, gives us a much nicer way of updating existing installations to 5.3 and an even nicer path back to the original packages once they come out. Thanks a lot!

Introducing sacy, the Smarty Asset Compiler

We all know how beneficial it can be to the performance of a web application to serve assets like CSS and JavaScript files in larger chunks as opposed to smaller ones.

The main reason behind this is the latency incurred by requesting a resource from the server, plus the additional bandwidth of the request metadata, which can grow quite large once you take cookies into account.

But knowing this, we also want to keep files separate during development to help us with the debugging and development process. We also don’t want deployment to become much more difficult, so we naturally dislike solutions that require additional scripts to run at deployment time.

And we certainly don’t want to mess with the client-side caching that HTTP provides.

And maybe we’re using Smarty and PHP.

So this is where sacy, the Smarty Asset Compiler plugin comes in.

The only thing (besides a one-time configuration of the plugin) you have to do during development is to wrap all your <link> tags in {asset_compile}…{/asset_compile} (a usage sketch follows after the list below) and the plugin will do everything else for you, where everything includes:

  • automatic detection of actually linked files
  • automatic detection of changed files
  • automatic minimizing of linked files
  • compilation of all linked files into one big file
  • linking that big file for your clients to consume. Because the file is still served by your webserver, there’s no need for complicated handling of client-side caching methods (ETag, If-Modified-Since and friends): Your webserver does all that for you.
  • Because the cached file gets a new URL every time any of the corresponding source files change, you can be sure that requesting clients will retrieve the correct, up-to-date version of your assets.
  • sacy handles concurrency, without even blocking while one process is writing the compiled file (and of course without corrupting said file).
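
In template terms, the usage really is as simple as wrapping the existing tags; the file names below are made up and the generated output path is purely illustrative:

    {asset_compile}
        <link rel="stylesheet" type="text/css" href="/css/reset.css" />
        <link rel="stylesheet" type="text/css" href="/css/layout.css" />
        <link rel="stylesheet" type="text/css" href="/css/widgets.css" />
    {/asset_compile}

    {* what the client ends up seeing is a single tag along the lines of
       <link rel="stylesheet" type="text/css" href="/assetcache/combined-<hash>.css" />
       where the exact path and naming scheme are sacy's business, not something to rely on *}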

sacy is released under the MIT license and ready to be used (though it currently only handles CSS files and ignores the media attribute – stuff I’m going to change over the next few days).

Interested? Visit the project’s page on GitHub or even better, fork it and help improving it!

OpenStreetMap

The last episode of FLOSS Weekly consisted of an interview with Steve Coast of OpenStreetMap. I knew about the project, but I was under the impression that it was in its infancy, both content-wise and from a technical perspective.

During the interview I learned that it’s surprisingly complete (unless, it seems, you need a map of Canada) and highly advanced from a technical point of view.

But what’s really interesting is how terribly easy it is to contribute. For smaller edits, you just click the edit link and use the Flash editor to paint a road or give it a name. If you need or want to do more, there’s a really easy-to-use Java-based editor:

First you drag a rectangle onto a pre-rendered version of the map, which makes the server send you the vector data for that area, and then you can edit whatever you want.

If you have them, you can import traces from a GPS logger to help you add roads and paths, and when you are finished, you press a button, the changes get uploaded, and they become visible to the public a few minutes later (though one modification I made took about an hour to arrive on the web).

When the same nodes were updated in the meantime, a really nice conflict resolution assistant will help you resolve the conflicts.

For me personally, this has the potential to become my new after-work time sink as it combines quite many passions of mine:

  • The GPS tracking, importing and painting of maps is pure technology fun.
  • Actually being outside to generate the traces is healthy and also a lot of fun.
  • Maps also are a passion of mine. I love to look at maps and I love to compare them to my mental image of the places they are showing.

And besides all that, OpenStreetMap is complete enough to be of real use. For biking or hiking, it even trumps Google Maps by far.

Still, at least near where I live, there are many small issues that can easily be fixed.

As the different editors are really easy to use, fixing these issues is a lot of fun and I’m totally seeing myself cleaning out all small mistakes I come across or even adding stuff that’s missing. After all, this also provides me with a very good reason to visit the places where I grew up to complete some parts.

The whole concept behind being able to update a map by just a couple of mouse clicks is very compelling too as it finally gives us the potential to have really accurate maps in a very timely fashion. For example: Last October, one of the roads near my house closed and just recently the tracks of the Forchbahn were moved a bit.

Just today I added these changes to OpenStreetMap, and now OSM is the only publicly available map that correctly shows the traffic situation. And all that with 15 minutes of easy but interesting work.

For those interested, my Open Street Map user profile is, of course, pilif.

PostgreSQL 8.4

Like clockwork, about one year after the release of PostgreSQL 8.3, the team behind the best database in the world did it again and released PostgreSQL 8.4, the latest and greatest in a long series of awesomeness.

Congratulations to everyone involved, and may you have the strength to keep improving your awesome piece of work.

For me, the highlights of this new release are

  • parallel restore: I just tried this out and restoring a dump that usually took around 40 minutes (in the standard sql/text format) now takes 5 minutes (a quick sketch of the commands follows after this list).
  • The improvements to psql usability make it even clearer that psql isn’t just a command line database tool, but one of the best interfaces for accessing the data and administering the server. psql hands-down beats any database GUI tool I have seen so far.
  • truncate table … restart identity is very useful during development
  • no more max_fsm_pages makes maintaining the database even easier and removes one variable to keep track of.
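
Regarding the parallel restore: it only works with the custom archive format, so the dump has to be created with -Fc; the commands then look roughly like this (database names and job count are examples, not a recommendation):

    # dump in the custom format so pg_restore can parallelize the restore
    pg_dump -Fc -f popscan.dump popscan

    # restore into an (already created) database using 4 parallel jobs (new in 8.4)
    pg_restore -j 4 -d popscan_restore popscan.dump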

Thanks again for yet another awesome release.

New MacMini (early 09) and Linux

The new MacMinis that were announced this week come with a FireWire 800 port, which was reason enough for me to update shion yet again (keeping the host name, of course).

All the media she’s serving to my various systems is stored on a second-generation Drobo, which is currently connected via USB 2.0 but also has an idle FW800 port.

Of course the upgrade to FW800 will not double the transfer rate to and from the Drobo, but it should increase it significantly, so I went ahead and got one of the new Minis.

As usual, I inserted the Ubuntu (Intrepid) CD, held C while turning the device on and completed the installation.

This left the Mini in an unbootable state.

It seems that this newest generation of Mac hardware isn’t capable of booting from an MBR-partitioned hard drive. Earlier Macs complained a bit if the drive wasn’t correctly partitioned, but then went ahead and booted the other OS anyway.

Not so much with the new boxes it seems.

To finally achieve what I wanted, I had to go through the following complicated procedure:

  1. Install rEFIt (just download the package and install the .mpkg file)
  2. Use the Bootcamp assistant to repartition the drive.
  3. Reboot with the Ubuntu Desktop CD and run parted (the partitioning could probably be accomplished using the console installer, but I didn’t manage to do it correctly).
  4. Resize the FAT32 partition which was created by the Boot Camp partitioner to make room at the end for the swap partition.
  5. Create the swap partition.
  6. Format the FAT32 partition with something useful (ext3; a sketch of steps 4–6 follows after the list).
  7. Restart and enter the rEFIt partitioner tool (it’s in the boot menu)
  8. Allow it to resync the MBR
  9. Insert the Ubuntu Server CD, reboot holding the C key
  10. Install Ubuntu normally, but don’t change the partition layout – just use the existing partitions.
  11. Reboot and repeat steps 7 and 8
  12. Start Linux.
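
For reference, steps 4 to 6 boil down to something like the following in a terminal on the Desktop CD; the device and partition numbers are whatever your setup ends up with, not necessarily these:

    # inspect the GPT layout first to find the right partition numbers
    sudo parted /dev/sda print

    # after shrinking the FAT32 partition in parted's interactive mode:
    sudo mkswap /dev/sda4       # the newly created swap partition
    sudo mkfs.ext3 /dev/sda3    # re-format the former FAT32 partition as ext3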

Additionally, you will have to keep using rEFIt, as the boot device control panel item no longer recognizes the Linux partitions and thus can’t boot from them.

Now to find out whether that stupid resistor is still needed to make the new mini boot headless.

All-time favourite tools – update

It has been more than four years since I’ve last talked about my all-time favourite tools. I guess it’s time for an update.

Surprisingly, I still stand behind the tools listed there: my love for Exim is still unchanged (it just got bigger lately – but that’s for another post). PostgreSQL is cooler than ever and powers PopScan day in, day out without flaws.

Finally, I’m still using InnoSetup for my Windows Setup programs, though that has lost a bit of importance in my daily work as we’re shifting more and more to the web.

Still. There are two more tools I must add to the list:

  • jQuery is a JavaScript helper library that allows you to interact with the DOM of any webpage while hiding away browser incompatibilities. There are a couple of libraries out there which do the same thing, but only jQuery is such a pleasure to work with: it works flawlessly, provides one of the most beautiful APIs I’ve ever seen in any library, and there are tons and tons of self-contained plug-ins out there that help you do whatever you could want to do on a web page.
    jQuery is an integral part of making web applications equivalent to their desktop counterparts in matters of user interface fluidity and interactivity.
    All while having such a nice API that I’m actually looking forward to doing the UI work – as opposed to the earlier days, which can most accurately be described as “UI sucks” (a tiny example follows after this list).
  • git is my version control system of choice. There are many of them out there and I’ve tried the majority of them for one thing or another. But only git combines awesome backwards compatibility with what I’ve used before and what’s still in use by my coworkers (SVN) with the ability to beautify commits, work with feature branches, execute very quickly and share patches very easily.
    No single day passes without me using git and running into a situation where I’m reminded of the incredible beauty that is git.
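
To give a tiny taste of why I find the jQuery API so pleasant (selector and class names made up, naturally):

    // hide every row marked as obsolete, then fade in a status note:
    // one chainable, readable line per intent, working in every browser
    $('#orders tr.obsolete').hide();
    $('<p class="note">3 obsolete orders hidden</p>')
        .appendTo('#status')
        .fadeIn('slow');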

In four years, I haven’t seen a single other tool I’ve used as consistently and with as much joy as git and jQuery, so those two have certainly earned their spot in my heart.