Sorry. Connection’s down

We all know it: network connections are unreliable. That’s OK and I have no problem whatsoever with it. Connections can go down. Nothing serious, nothing special.

There are multiple ways software can let you know that a connection dropped:

  • Crash. This is the second worst way to handle it. At least the user knows what to do: restart the application and it will (hopefully) work again.
  • Connection failed: Software caused the connection to abort. Somewhat incorrect, too much information, and a bit scary for the end user, but common in many Winsock applications, as this is the default error message you can ask Windows to provide for a given error code.
  • Sorry. The connection somehow went down. Should I try to connect again? Correct, not technical, not scary. This is how I try to explain it to my users.

Well… and then there’s the IBM DB2 client:

SQL30081N A communication error has been detected. Communication protocol being used: “TCP/IP”. Communication API being used: “SOCKETS”. Location where the error was detected: “3.134.144.87”. Communication function detecting the error: “send”. Protocol specific error code(s): “104”, “*”, “0”. SQLSTATE=08001

What the hell?

Firefo^WDeer Park Alpha 1

Yesterday, a developer preview of Firefox 1.1 was released. To avoid confusing end users, they’ve called it Deer Park Alpha 1. You won’t see (m)any Firefox references in the UI.

As always with a major release, extensions and themes tend to break. And as always, you can try to patch the install.rdf file inside the XPI file (it’s just a ZIP archive) by raising its maxVersion, and see whether the extension still works. Here’s what I got so far:

  • Installing Deer Park Alpha 1 breaks Firefox: you basically get an unstyled white screen when you start Firefox. This is not great, but unavoidable I suppose.
  • You can patch up the Qute theme and it mostly works (install it with this script). The preferences screen looks funny though (it’s mostly transparent). So if you don’t change any preferences, you can go with Qute.
  • The Web Developer toolbar continues to work without patching, though with limited functionality.
  • Download Manager Tweak works as always, though you can’t access its preferences screen from the preferences dialog (from the extensions window it works fine).
  • Feed Your Reader can be patched up. It does not work any more though.
  • Greasemonkey can be patched up. It does not work though; it throws an error when trying to install a user script.
  • Platypus seems to work fine, though it’s useless as long as Greasemonkey does not work.
  • Adblock can be patched and actually continues to work.

This scenario underlines the one problem I have with Firefox: the developers seem unable to provide a stable extensions API. On the one hand this is a good thing: cleaning up the API now and then helps keep the product clean and fast. On the other hand, it is bad for the end user. What do you do if your favourite plugin stops being developed and a new browser version comes out? Either you stop using the plugin, or you stay with the old release of the browser (I’d do that if Adblock stopped working, for example).

But you can’t stay on old versions. Sometime in the future, a security problem will show up, and if you are unlucky, the older version is no longer supported. So the choice is: stop using the plugin or surf with an insecure browser.

That’s why I have so few extensions installed. Those I have are popular enough to give me some guarantee that they will be updated. Those I’d like to install but that come without such guarantees, I don’t install at all, so I don’t get used to having them available.

This is not the best situation ever. The people at Mozilla should try to stabilize the API as soon as possible, and they should stay backward compatible for at least two major releases or so.

I will now go and look for the people responsible for all those extensions and try to report my findings to them. And hope for the best.

World of Warcraft: Language Packs

Well. Back here, I begged Blizzard to release a language pack for WoW, as I had real difficulties playing on an English server with my German version (something I later worked around with a semi-legal solution).

Today, they released language packs called ELP which do exactly what I asked for in my blog entry.

Now, if the installation didn’t take that long, I’d happily remove my semi-legal setup and replace it with the original again.

Thank you so much for seeing and solving this problem, Blizzard!

31337 OOP code?

In the current issue of php|architect, there’s an article about “enterprise-ready” session management. While it provides a nice look at how to structure your application (apart from the capital mistake of endorsing a multiple-entry application structure – but I’ll save that for another post) and at some design patterns, I have one big objection to the article: it basically says that the $_SESSION facilities in PHP are not enterprise-ready. The article names three reasons:

  1. It is not OOP enough
  2. The session ID is guessable
  3. The storage location for the session data does not work with load balancers

The article then goes further and writes a complete replacement for PHP’s session API.

Now, let’s have a look at those points:

Point 3 is valid. If your load balancer cannot guarantee that each subsequent request from a user goes to the same server, /tmp is not a good place to store session data. What the article does not tell you is that most load balancers actually do make that guarantee. Reading the session data from a file, unserializing it, using it, serializing it and storing it back to a file is probably faster than doing the same thing with a database. Maybe you should do some testing and then decide – at least when you have real enterprise-grade load balancers at your disposal.

Point 2 is also somewhat true, but the workaround provided by the article is no better than what PHP already does. I especially dislike taking a hash of the first two octets of the IP address as protection against session spoofing. Hey: two octets of the IP range are left unchecked. That’s 65,536 addresses. Say I want to spoof sessions on your site: instead of 4 billion candidate addresses, I only have 65 thousand to try. Even if only 1% of the users in that range do online financial transactions on your site, it’s worth it for me. I just get an account at a particular ISP and try out my range.

It’s unfair to say PHP’s session ID generation is weak because it uses the system time (amongst other things) and then to create a replacement algorithm using the system time (amongst other things).

The idea with the second ID is somewhat valid, but it does not protect at all against network-based attacks (listening on the network and replaying a valid request).

My biggest concern – the one that actually made me write this – is point 1. Tell me: what’s better about

 HTTPRequest::getSession()->getValue('gnegg');

than

 $_SESSION['gnegg'];

As I see it, the first version has three distinct disadvantages:

  • Depending on the state of PHP’s optimizer, this involves two function calls (in PHP userland code – and maybe countless more in the backend) per variable you query (and, with the proposed implementation, one additional database query(!)). Function calls are expensive. This performs badly – not with two or three queries, but certainly with 100 or 1000 per second.
  • The second method is the one documented and endorsed by PHP. Any coder you find will know what it means and how to work with it. Whenever you hire a new coder, they will immediately understand your session management code and can concentrate on the business logic. The first method does not have this advantage; it’s just another hurdle for the coder to clear before being productive. A needless hurdle.
  • It’s more code. More to type. More work to do. Thus inefficient for your programmers.
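The overhead claim in the first point is easy to check for yourself. Here is a rough micro-benchmark; the wrapper function is a hypothetical stand-in for a getSession()->getValue() chain, and the absolute numbers will vary by machine and PHP version – only the relative difference matters.

```php
<?php
// Compare direct $_SESSION access against access through a wrapper
// function, as a rough stand-in for the OOP accessor chain.
session_start();
$_SESSION['gnegg'] = 42;

function getValue($key) {
    return $_SESSION[$key];  // $_SESSION is a superglobal
}

$n = 100000;

$t = microtime(true);
for ($i = 0; $i < $n; $i++) { $direct = $_SESSION['gnegg']; }
$t_direct = microtime(true) - $t;

$t = microtime(true);
for ($i = 0; $i < $n; $i++) { $wrapped = getValue('gnegg'); }
$t_wrapped = microtime(true) - $t;

printf("direct: %.4fs  wrapped: %.4fs\n", $t_direct, $t_wrapped);
```

Run it a few times and compare; the wrapped version pays the function-call cost on every single access.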

Saying the first one is better because it’s more OOP is like saying “I am more 31337 than you because I’m using Windows”, or “rogues in World of Warcraft are more 31337 than warriors”, or… take your pick (a phrase involving vi and emacs springs to mind).

So. Of the three points the author of the article had to present, only one, maybe two, are valid. Does this justify dumping the whole session management functionality in PHP? No, it does not. Dumping ready-to-use functionality is always bad – especially if the functionality you want to dump is extensible (and thus fixable for your purposes).

PHP’s session management can be customized! Just have a look at the manual: there is session.save_handler and session.serialize_handler. There’s even session.entropy_file.
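To illustrate that extensibility, here is a minimal sketch (not production code) using PHP’s documented session_set_save_handler() hook. The file-per-session storage below is a stand-in for whatever shared backend (database, network share) your load-balancer situation requires; the sess_demo_ prefix and temp directory are made up for the example.

```php
<?php
// Swap the session storage backend while keeping the documented
// $_SESSION interface untouched.

function sess_file($id) {
    return sys_get_temp_dir() . '/sess_demo_' . $id;
}

function sess_open($save_path, $name) { return true; }
function sess_close()                 { return true; }

function sess_read($id) {
    return is_file(sess_file($id)) ? file_get_contents(sess_file($id)) : '';
}

function sess_write($id, $data) {
    return file_put_contents(sess_file($id), $data) !== false;
}

function sess_destroy($id) {
    if (is_file(sess_file($id))) unlink(sess_file($id));
    return true;
}

function sess_gc($maxlifetime) { return true; }

session_set_save_handler('sess_open', 'sess_close', 'sess_read',
                         'sess_write', 'sess_destroy', 'sess_gc');

session_start();
$_SESSION['gnegg'] = 'works';  // the familiar API stays identical
session_write_close();
```

Whether there is a file, a database or something else behind it, code using $_SESSION stays exactly the same – which is precisely the point.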

So, after all, another of those people trying to be god-like by writing about “the enterprise” without really knowing what it means. The Java world is full of such individuals, and now PHP is getting them too. The price of becoming well known? Maybe.

The most pleasant installation experience

The most pleasant experience I ever had installing a web-based application was installing Gallery 2 Beta 1. I’ve never seen such a polished assistant, and I’ve never seen a web-based installer work as well as Gallery’s did.

While I was really, really happy with it, I did not blog about it (shame on me).

But now that I have updated to Beta 3, this really, really is cause for a blog entry.

The update process uses the same assistant style as the installer and is just as pleasant and unproblematic as the installation. Call your gallery, read, click “next”, repeat. Done. Fast, pleasant and error-free.

Congratulations to the gallery team. You rock!

Oh, and the gallery is here.

Lots of fun with OpenVPN

OpenVPN may seem to you like “just another VPN solution”. And maybe you are right.

However, OpenVPN has some distinct advantages over other VPN solutions that make it quite interesting to deploy:

  • NAT traversal: OpenVPN uses plain old UDP packets as its transport. Every NAT router in the world can forward them correctly out of the box. If not, create the usual port-forwarding rule and be done with it. If that fails too (however it could fail), fall back to TCP.
  • Ease of use: install, create two certificates, use the VPN. It’s as easy as 1-2-3.
  • Designed with small installations in mind: OpenVPN is not a big, slow beast like IPsec, for example. While it may not be as secure, it does not have all the problems associated with IPsec.
  • User space: OpenVPN runs completely in user space (using the TUN device provided by the kernel). This makes the installation non-critical and requires no reboots; updates in case of security problems don’t require reboots either.

So, after this unexpected praise: what brought me to write this post?

Well, I’ve just deployed one of the coolest things on earth: using OpenVPN, I have connected my home network to the network at the office. Both ends see each other and allow direct connections. Not only can I print on the office’s printers from home (which admittedly is as useless as it is cool), I can also – for example – stream music from home to the office over a secured channel. All using straight IP connections, without any NAT trickery or other tricks.

Actually, not even one port is forwarded through my NAT gateway (a ZyAir B-2000 – as the AirPort base station does not allow static routes (see below), I was forced to cross-grade).

I already had some of this functionality with my previously deployed PPTP setup, though that had some disadvantages:

  • Flaky support in Linux. Maintaining the beast across Windows and Mac versions was not easy, as something always broke with new versions.
  • Suboptimal security. You know: PPTP has flaws – quite like WEP – though I tried to work around them by using very, very long passwords.
  • Suboptimal usability: when I wanted to connect to the office, I had to dial into the VPN, so user interaction was needed. Additionally, the default gateway was redirected (I could have turned that off), so all open TCP connections got cut when I dialled.

My current solution has none of those problems (I don’t know about the security, of course – no one does; for now, OpenVPN is said to be secure): no dialling is required, and no problems with changing software versions are to be expected, as it runs on a dedicated router which I don’t intend to change. The default gateway is not changed either, so normal internet connections go out directly. This way I’m unaffected by the office’s suboptimal upstream of 65 KB/s (unless I use services from the office, of course – but that is unavoidable).

So. What did I do?

First of all, I had to recompile the kernel on the server side, as I had not included TUN support when I created my .config last year. After that, emerge openvpn was all that was needed. I kept the default configuration file mostly intact (install with the “examples” USE flag and use the example server.conf), but made some minor adjustments:

local x.x.x.x
push "route 192.168.2.0 255.255.255.0"
client-config-dir ccd
route 192.168.3.0 255.255.255.0
#push "redirect-gateway"

(just the changed lines)

and the /etc/openvpn/ccd/Philip_Hofstetter:

iroute 192.168.3.0 255.255.255.0

Now, what does this configuration do?

  • Bind to the external interface only. This is purely cosmetic.
  • Push the route to the internal network to the client. With the default configuration, all OpenVPN addresses are in the 10.8.0.0 network, which allows for nice firewall settings on the server side. The 192.168.2.0/24 network is our office network.
  • Tell OpenVPN that there are client-specific configuration options for reaching the 192.168.3.0/24 net, which is my home network.
  • Comment out the option that would let OpenVPN set the default gateway. We really don’t want all the traffic in my home net going through the office.

Then we create the client configuration file. It’s named after the CN you use in the SSL certificate, with spaces replaced by underscores. You can see the correct value by setting everything up and then connecting to the server while watching the log file.

In the client specific configuration-file we confirm the additional route we want to create.

The configuration file on the client router is unchanged from the default.
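For completeness, the default client configuration boils down to something like this minimal sketch – the server name, port and file names here are placeholders, not my actual values:

```
client
dev tun
proto udp
remote vpn.example.com 1194
ca ca.crt
cert client.crt
key client.key
persist-key
persist-tun
```

The important part is that the CN in client.crt matches the file name in the server’s ccd directory.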

The only thing you need now is the SSL certificates. Create one for the server and one more for each client. I won’t go into that in this article, as it’s somewhat complicated in itself, but you’ll find lots of guides out there.

I used our company’s CA to create the certificates for both the server and the client.

After this, it’s just a matter of /etc/init.d/openvpn start on both machines (the paths to the certificates and keys in the configuration files must match your created files, of course).

Just watch out for the routing: on the server side I had to change nothing, as the server was already entered as the default gateway on all the clients in the office network.

In the client network, I had to do some tweaking, as the default gateway was set to the AirPort base station, which (understandably) knew nothing about the 192.168.2.0/24 network and so was unable to route those IP packets to the VPN gateway in the internal network (my Mac Mini).

Usually you solve that by installing a static route on the default gateway in your network. Unfortunately, this is not possible on an AirPort base station – a problem I solved by replacing it with a ZyAir B-2000 from ZyXEL, which allows setting static routes.

On the new access point I created a route equivalent to this Unix command:

route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.3.240

Where 192.168.3.240 is the address of my Mac Mini on which OpenVPN was running as client.

Then I issued “echo 1 > /proc/sys/net/ipv4/ip_forward” on the Mac Mini to allow the packets to be forwarded.

So whenever I send packets to one of the office’s computers – let’s say 192.168.2.98 – this is what happens:

  1. The client uses its IP and netmask to find out that the packet cannot be delivered directly. It sends it to the default gateway (my ZyAir).
  2. The ZyAir consults its routing table for a route to 192.168.2.0/24 and finds 192.168.3.240 as the gateway for that network (every other address would have been routed through my cable modem).
  3. 192.168.3.240, shion, checks its own routing table, where OpenVPN has created a route through the VPN interfaces (10.8.0.x) to the 192.168.2.0/24 network. It delivers the packet there.
  4. On the other end of the tunnel, the OpenVPN server delivers the packet to the destination machine.

The reply packets take the same path, just from bottom to top.

After getting the routing the way I wanted it (verifiable by pinging between computers in both networks), the next step was pure cosmetics:

  • Create an internal DNS server and use it as a slave for the office’s DNS server, so DNS lookups work without crossing the VPN each time.
  • Use said DNS server to create entries for the computers in my home network.
  • Make the office DNS server a slave for that home zone (to reach my computers by name).
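Assuming BIND on both DNS servers, the slave side of such a setup is a short named.conf fragment; the zone name and master address below are made up for illustration:

```
// On the home DNS server: pull the office zone as a slave,
// so lookups are answered locally instead of crossing the VPN.
zone "office.example.com" {
    type slave;
    masters { 192.168.2.1; };
    file "slave/office.example.com.db";
};
```

The office server gets the mirror-image statement for the home zone, plus an allow-transfer entry for its new slave.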

All of this was most interesting to implement and went much more smoothly than anything else I’ve tried so far VPN-wise. Finally, I have the optimal solution concerning connectivity to my office.

And besides that: It was fun to implement. Just worthy of a “Supreme nerd” – the title I got here for my 92 points.

FreeNX

[Screenshot: nx.png]

FreeNX is the GPLed variant of NoMachine’s NX product.

While exporting X sessions has never been a problem, it was kind of slow, especially on connections with limited bandwidth. NX tries to solve this with some tricks at the X11 protocol level, a little proxy server and a big local bitmap cache. They promise smoothly working X sessions even over a 56K modem.

Well, I have installed KDE and now FreeNX on my Mac Mini, which I bought for the sole purpose of being a little home server/VPN gateway. My NSLU2, while a really nice little thing, does not work with OpenVPN, as its kernel lacks TUN support.

Installation was easy and flawless – apart from forcing me to forward port 5000 to the NATed Mac Mini, as the commercial (freeware) Windows client seems to have problems with the FreeNX server when tunneling the X session over SSH.

The client works very well too. And I can say: It’s fast. Very, very fast.

Some more things to note about the screenshot:

  • While I used to have the policy of naming servers after persons and then locations from “The Lord of the Rings”, I somewhat ran out of names, so I began using names from RPGs. My Mac Mini is called Shion, after Shion Uzuki of Xenosaga.
  • I’m running Gentoo, of course.
  • Installing FreeNX is as easy as emerge nxserver-freenx on Gentoo.
  • The screenshot is of a session exported at 800×600 pixels. Using more pixels does not slow down the session significantly, but 800×600 was comfortable on my current display, leaving room for other things besides the session.

Snom 190

The Snom 190 is a SIP hardware phone which I recently ordered to continue my Asterisk experiment.

Yesterday it arrived.

I have to say: I love this device. Contrary to those proprietary PBX phones, the Snom 190 is easy to use, provides a big heap of features (complete remote manageability, a web interface, dialing via HTTP request (Outlook plugin – here I come)) and does not cost more than what other companies ask for their lowest entry-level phones. The Snom even looks good!

Like many other devices today, the Snom 190 runs Linux (2.4), though this time I have not tried to hack it yet. All the sources, including the development environment, are available on Snom’s website.

Contrary to the somewhat crappy ZyXEL 2000W, which I have tested too, the Snom 190 is ready for production use.

This makes implementing VoIP at our company seem more and more likely every day.

The greatest gadget ever

Recently I thought: “Well… having this iMac as a server is all nice and well, but what about making all that a little more embedded? What about not having to keep this iMac running all the time? After all, it is not always as silent as I would have wished. And I really wanted something more hackish.”

So I went after the Linksys WRT54G. There are two firmware images you can flash onto it: on one hand the more or less proprietary one by Sveasoft, and on the other hand the one by OpenWRT, the latter being the only one that actually allows installing packages.

I bought myself one of those Linksys thingies and was less than pleased. The Sveasoft firmware worked well, adding some extended features to the device, but did not allow me to install anything (or even change configuration files). OpenWRT fixed the read-only problem, but I could not get WPA to work.

After all, the device is of limited use as a home server: the storage at your disposal is just too limited. So I went out to fix that problem.

The first thing that came to my mind was one of those “network hard drives” – the poor man’s NAS.

I went to one of those big retailers and found the Linksys NSLU2, which exports externally attached USB drives via CIFS (or SMB, or Samba, or whatever you call it).

Before doing anything with the device – having Linksys’ relationship with Linux in mind – I googled around a bit and found NSLU2 Linux.

After getting it installed (the root password thing was a bit tricky, but thorough RTFM helped here), I slowly got very, very impressed.

What you get is the usual stripped-down Linux distribution, but the root FS is writable, so you can change the configuration in place. Then you can use the attached hard drive as storage for additional software, thus working around the one problem I had with the WRT54G: inextensibility.

After you install the basic distribution, there’s little more than 1 MByte of free space on the device’s flash ROM. But there’s this script, unslug, that enables the device plugged into the first USB port as storage for additional software. And additional software there is plenty of.

After installing the package unslug-feeds (with ipkg install unslug-feeds) you gain access to a repository containing software like Apache, PHP, PostgreSQL, a BitTorrent client, CUPS, Perl (for the Slimp3)… just about everything you need in a decent Linux distribution (and more, less useful stuff like OpenLDAP). You even get Asterisk – and there’s a way to install additional USB drivers. If only AVM provided kernel modules for the ARM kernel running on the device; then the NSLU2 would be the smallest PBX on this planet.

The best thing is: while the firmware by Linksys does not allow it, with the improved version you can plug a USB stick into the first USB port and use it as the target for additional software installation.

This allows installing a complete Linux distribution on a device with no mechanical parts whatsoever. No PC you build yourself will ever be this silent. Neither is my iMac. Finally a home server that makes no sound at all. This is great.

Because I have no USB stick at hand, I have not run unslug yet, but I will tomorrow.

Then I’m going to plug my newly bought external 250 GB hard disk into the second USB port and use it as storage for a BitTorrent client I’ll eventually install on the USB stick – and for my MP3s, which a Squeezebox server installed on the USB stick will serve. So when I’m awake, I turn on the hard disk to serve MP3s to the Squeezebox; when I go to sleep, I just turn it off, keeping the rest of the server running.

This little device is so extremely great. I really, really like it so far, and I can’t wait to see it work at its fullest potential.

This is the best CHF 150.– I’ve ever spent in my whole life.