31337 OOP code?

In the current issue of php | architect, there’s an article about “enterprise ready” session management. While it provides a nice look at how to structure your application (besides the capital mistake of endorsing a multiple-entry application structure – but I’ll save that for another post) and at some design patterns, I have one big objection to the article: it basically says that the $_SESSION facilities in PHP are not enterprise-ready. The article names three reasons:

  1. It is not OOP enough
  2. The Session-ID is guessable
  3. The storage location for the session-data does not work with load balancers

The article then goes further and writes a complete replacement for PHP’s session API.

Now, let’s have a look at those points:

Point 3 is valid. If your load balancer cannot guarantee that each subsequent request from a user goes to the same server, /tmp is not a good place to store session data. What the article does not tell you is that most load balancers actually do make that guarantee. Reading the session data from a file, unserializing it, using it, serializing it and storing it back to a file is probably faster than doing the same thing with a database. Maybe you should do some testing and then decide – at least when you have real enterprise-grade load balancers at your disposal.

Point 2 is also somewhat true, but the workaround provided by the article is not any better than what PHP already does. I especially dislike taking a hash of the first two octets of the IP address as protection against session spoofing. Hey: two octets of the IP range are not checked. That’s 65,536 addresses. Say I want to spoof sessions on your site: instead of those four billion users I only have 65 thousand to try it with. Even if only 1% of the users in said range do some online financial transactions on your site, it’s worth it for me. I just create an account at a particular ISP and try out my range.

It’s unfair to say PHP’s session ID generation is weak because it uses the system’s time (amongst other things) and then create a replacement algorithm using the system’s time (amongst other things).

The idea with the second ID is somewhat valid, but it does not protect at all against network-based attacks (listening on the network and replaying a valid request).

My biggest concern – the one that actually made me write this – is point 1. Tell me: what’s better – wrapping every session access in method calls on a session object, or just using $_SESSION directly?
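The article’s actual code did not survive here, but the two versions presumably contrasted an OOP accessor with plain $_SESSION use. A minimal sketch of that shape (the class and method names are my invention, not the article’s):

```php
<?php
// Stand-in for session_start() so this sketch runs on the CLI.
$_SESSION = array();

// Hypothetical wrapper of the kind the article proposes.
class SessionManager
{
    public function get($key)
    {
        return isset($_SESSION[$key]) ? $_SESSION[$key] : null;
    }

    public function set($key, $value)
    {
        $_SESSION[$key] = $value;
    }
}

// "First version": two userland function calls per variable access.
$session = new SessionManager();
$session->set('user_id', 42);
$uid = $session->get('user_id');

// "Second version": plain $_SESSION, as documented by PHP.
$_SESSION['user_id'] = 42;
$uid = $_SESSION['user_id'];
```

Both versions end up with the same data in the same place – the wrapper just adds call overhead and an API nobody else knows.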
As I see it, the first version has three distinct disadvantages:

  • Depending on the state of PHP’s optimizer, this involves two function calls (in PHP userland code – and maybe countless others in the backend) per variable you query (and, with the proposed implementation, one additional database query(!) per access). Function calls are expensive. This performs badly – not noticeably with two or three accesses, but certainly with maybe 100 or 1000 per second.
  • The second method is the one documented and endorsed by PHP. Any coder you will find will know what it means and how to work with it. Whenever you hire a new coder, he will immediately understand your session management code and will be able to concentrate on the business logic. The first method does not have this advantage. It’s just another hurdle for the coder to clear before being able to be productive. A needless hurdle.
  • It’s more code. More to type. More work to do. Thus inefficient for your programmers.

Saying the first one is better because it’s more OOP is like saying “I am more 31337 than you because I’m using Windows”, or “rogues in World of Warcraft are more 31337 than warriors”, or … take your pick (a phrase involving vi and emacs springs to mind).

So. Of the three points the author of the article had to present, only one, maybe two, are valid. Does this justify dumping the whole session management functionality in PHP? No, it does not. Dumping ready-to-use functionality is always bad – especially if the functionality you want to dump is extensible (and thus fixable for your purpose).

PHP’s session management can be customized! Just have a look at the manual. There is session.save_handler and session.serialize_handler. There’s even session.entropy_file.
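For example, session.save_handler can be pointed at userland callbacks via session_set_save_handler() – which is exactly the hook you’d use to put sessions into a database for load-balanced setups. A minimal sketch, where the in-memory $store array is a stand-in for your real backend:

```php
<?php
// Stand-in for a database or other shared storage.
$store = array();

session_set_save_handler(
    function ($savePath, $name) { return true; },      // open
    function () { return true; },                      // close
    function ($id) use (&$store) {                     // read
        return isset($store[$id]) ? $store[$id] : '';
    },
    function ($id, $data) use (&$store) {              // write
        $store[$id] = $data;                           // e.g. an UPDATE/INSERT
        return true;
    },
    function ($id) use (&$store) {                     // destroy
        unset($store[$id]);
        return true;
    },
    function ($maxlifetime) { return true; }           // gc
);
```

After registering the handler, the usual session_start() / $_SESSION code works unchanged – only the storage location differs.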

So, after all, another of those people trying to be god-like by writing about the enterprise without really knowing what it means. The Java world is full of such individuals, and now PHP is getting them too. The price of being well-known? Maybe.

The most pleasant installation experience

The most pleasant experience I ever had when installing a web-based application was when I installed Gallery 2 Beta 1. I’ve never seen such a polished assistant. I’ve never seen a web-based installer work as well as the Gallery one did.

While I was really, really happy with this, I have not blogged about it (shame on me).

But now that I have updated to Beta 3, this really, really is cause for a blog entry.

The update process uses the same assistant type as the installer and is just as pleasant and unproblematic as the installation process. Open your gallery, read, click “next”, repeat. Done. Fast, pleasant and error-free.

Congratulations to the gallery team. You rock!

Oh, and the gallery is here.

Lots of fun with OpenVPN

OpenVPN may seem to you to be “just another VPN solution”. And maybe you are right.

However, OpenVPN has some distinct advantages over other VPN solutions that make it quite interesting for deployment:

  • NAT traversal. OpenVPN uses plain old UDP packets as a transport medium. Every NAT router in the world can forward them correctly out of the box. If not, create the usual port-forwarding rule and be done with it. If that fails too (however it could fail), use the TCP protocol.
  • Ease-of-use: Install, create two certificates, use the VPN. It’s as easy as 1-2-3
  • Designed with small installations in mind. OpenVPN is not a big slow beast like IPSec for example. While it may not be as secure, it does not have all the problems associated with IPSec.
  • User-space. OpenVPN runs completely in userspace (while using the TUN device provided by the kernel). This way the installation is non-critical and requires no reboots. Updates in case of security problems do not require reboots either.

So after this unexpected praise: What brings me to writing this posting?

Well, I’ve just deployed one of the coolest things on earth: using OpenVPN, I have connected my home network to the network in the office. Both ends see each other and allow for direct connections. I’m not only able to print on the office’s printers from home (which admittedly is as useless as it is cool), but I’m also able to – for example – stream music from home to the office over a secured channel. All using straight IP connections without any NAT trickery or other things.

Actually, not even one port is forwarded through my NAT gateway (a ZyAir B-2000 – as the Airport Basestation does not allow for static routes (see below), I was forced to cross-grade).

I already had some of this functionality using my previously deployed PPTP-setup, though this had some disadvantages:

  • Flaky support in Linux. Maintaining the beast across Windows and Mac versions was not easy, as something always broke with new versions.
  • Suboptimal security. You know: PPTP has flaws – quite like WEP. Though I’ve tried to work around them by using very, very long passwords.
  • Suboptimal usability: when I wanted to connect to the office, I had to dial into the VPN, so user interaction was needed. Additionally, the default gateway was redirected (I could have turned that off), so all open TCP connections got disconnected when I dialled.

My current solution does not have any of those problems (I don’t know about the security, of course – no one does; for now, OpenVPN is said to be secure): no dialling is required, and no problems with changing software versions are to be expected, as it runs on a dedicated router which I don’t intend to change. The default gateway is not changed either, of course, so the usual internet connections go out directly. This way I’m unaffected by the office’s suboptimal upstream of 65 KBytes/s (unless I use services from the office, of course – but this is unavoidable).

So. What did I do?

At the very first, I had to recompile the kernel on the server side once, as I had not included TUN support when I created my .config last year. After this, emerge openvpn was all that was needed. I kept the default configuration file mostly intact (install with the “examples” USE flag and use the example-server.conf), but made some minor adjustments:

local x.x.x.x
push "route"
client-config-dir ccd
#push "redirect-gateway"

(just the changed lines)

and the /etc/openvpn/ccd/Philip_Hofstetter:


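The contents of that file did not survive here. Going by the OpenVPN manual, a client-config-dir file that announces a network behind a client does so with an iroute statement – the addresses below are placeholders, not the actual networks:

```
# /etc/openvpn/ccd/Philip_Hofstetter
# 192.168.1.0/24 is a placeholder for the home network behind this client
iroute 192.168.1.0 255.255.255.0
```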
Now, what does this configuration do?

  • Bind to the external interface only. This has only cosmetic reasons.
  • Push the route to the internal network to the client. Using the default configuration, all OpenVPN addresses are in the network, which allows for nice firewall settings on the server side. The network is our office network.
  • Tell OpenVPN that there are some client-specific configuration options to reach the net which is my home network
  • Comment out the option that would let OpenVPN set the default gateway. We really don’t want all the traffic in my home net going through the office.

Then we create this client configuration file. It’s named after the CN you use in the SSL certificate, with spaces replaced by underscores. You can see the correct value by setting up everything and then connecting to the server while watching the logfile.

In the client-specific configuration file we confirm the additional route we want to create.

The configuration file on the client router is unchanged from the default.

The only thing you need now are the SSL certificates. Create one for the server and one more for each client. I won’t go into this in this article as it’s somewhat complicated in itself, but you’ll find lots of guides out there.

I used our company’s CA to create the certificates for both the server and the client.

After this, it’s just a matter of /etc/init.d/openvpn start on both machines (the path to the certificates/keys in the configuration files must match your created files of course).

Just watch out for the routing: On the server I had to change nothing as the server was already entered as default gateway on all the clients in the office network.

In the client network, I had to do some tweaking, as the default gateway was set to the Airport Basestation, which (understandably) knew nothing about the network and so was unable to route the IP packets to the VPN gateway in the internal network (my Mac Mini).

Usually you solve that by installing a static route on the default gateway in your network. Unfortunately, this is not possible on an Airport Basestation – a problem I have solved by replacing it with a ZyAir B-2000 from Zyxel, which allows for setting static routes.

On that new access point I created a route equivalent to this unix command:

route add -net netmask gw

Where is the address of my Mac Mini on which OpenVPN was running as client.
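With the elided addresses filled in with placeholders, the command would look something like this (10.1.0.0/16 standing in for the office network and 192.168.1.10 for the Mac Mini’s LAN address):

```
# placeholders: substitute your office network and your VPN gateway's LAN address
route add -net 10.1.0.0 netmask 255.255.0.0 gw 192.168.1.10
```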

Then I issued “echo 1 > /proc/sys/net/ipv4/ip_forward” on the Mac Mini to allow the packets to be forwarded.

So whenever I send packets to one of the office’s computers, this is what happens:

  1. The client uses its IP and netmask to find out that the packet cannot be delivered directly. It sends it to the default gateway (my ZyAir).
  2. The ZyAir consults its routing table for the route to the office network and finds the gateway for that network (every other address would have been routed through my cable modem).
  3. That gateway, shion, consults its own routing table, where OpenVPN has created a route through the VPN interfaces (10.8.0.x) to the network. It delivers the packet there.
  4. On the other end of the tunnel, the OpenVPN-Server delivers the packet to the destination server.

The path of the reply-packets is the same – just from the bottom to the top.

After getting the routing the way I wanted it (verifiable by pinging between computers in both networks), the next step was pure cosmetics:

  • Set up an internal DNS server. Use it as a slave for the office’s DNS server to allow DNS lookups to work without crossing the VPN each time.
  • Use said DNS-server to create entries for the computers in my home network
  • Make the office DNS server a slave for that home zone (to reach my computers by name).
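In BIND terms, the home-side half of those steps could look roughly like this – the zone names and the master’s address are assumptions, not taken from the actual setup:

```
// named.conf on the internal DNS server at home
zone "office.example" {
    type slave;
    masters { 10.1.0.1; };       // the office DNS server (assumed address)
    file "slave/office.example";
};

zone "home.example" {
    type master;
    file "master/home.example";  // entries for the home computers
};
```

A matching slave zone for "home.example" on the office DNS server completes the last step.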

All of this was most interesting to implement and went much more smoothly than anything else I’ve tried so far VPN-wise. Finally, I have the optimal solution concerning connectivity to my office.

And besides that: It was fun to implement. Just worthy of a “Supreme nerd” – the title I got here for my 92 points.


FreeNX

FreeNX is the GPLed variant of NoMachine’s NX product.

While exporting X sessions has never been a problem, it was kind of slow, especially on connections with limited bandwidth. NX tries to solve this by using some tricks at the X11 protocol level, a little proxy server and a big local bitmap cache. They promise fluidly working X sessions even over a 56K modem.

Well, I have installed KDE and now FreeNX on my Mac Mini, which I bought for the sole purpose of being a little home server/VPN gateway. My NSLU2, while being a really nice little thing, does not work with OpenVPN due to the kernel lacking TUN support.

Installation was easy and flawless – besides forcing me to forward port 5000 to the NATed Mac Mini, as the commercial (freeware) Windows client seems to have problems with the FreeNX server when tunneling the X session over ssh.

The client works very well too. And I can say: It’s fast. Very, very fast.

Some more things to note about the screenshot:

  • While I usually had the policy of naming servers after persons and then locations from “Lord of the Rings”, I somewhat ran out of names, so I began using names from RPGs. My Mac Mini is called Shion, after Shion Uzuki of Xenosaga.
  • I’m running Gentoo, of course.
  • Installing FreeNX is as easy as emerge nxserver-freenx on Gentoo.
  • The screenshot is of a session exported at 800×600 pixels. Using more pixels does not slow down the session significantly, but 800×600 was comfortable to use on my current display, leaving room for other things besides the session.