Amazing Ubuntu

I must say, I’m amazed how far Ubuntu Linux has come in the last 6 months.

When I tried 5.10 last October, it was nice, but it was still Linux as I had experienced it ever since I first tried it on the desktop – flaky: WLAN didn’t work, DVDs didn’t work, videos didn’t work (well… they did, but audio and video desynced after playing for more than 10 seconds), fonts looked crappy compared to Windows and OS X, and suspend and hibernate didn’t work (or rather worked too well – the notebook didn’t come up again after suspending / hibernating).

I know there were tutorials explaining how to fix some of the problems, but why work through tons of configuration files when I can just install Windows or OS X and have it work out of the box?

Now, yesterday, I installed Ubuntu 6.06 on my Thinkpad T42.

Actually, I tried updating my 5.10 installation, but after doing so, my network didn’t work any longer. And in contrast to Windows, OS X and even Gentoo Linux, where the fix is obvious or well documented with useful error messages, I had no chance of fixing it in Ubuntu on short notice.

Seeing that I had no valuable data on the machine, I just went ahead with a reinstallation.

WPA still didn’t work with the tools provided by default. Now, we all know that WEP is not safe any more, and in my personal experience it’s much flakier than WPA (connections dropping or not even coming up). How can a system like Linux, which is so security-centered, not support WPA? Especially as it also works better than WEP.

To Ubuntu’s credit, I have to say that a tool to fix WPA on the desktop, NetworkManager, was released post-feature-freeze. If you know what to do, it’s just a matter of installing the right packages to get it to work (and fixing a strange icon resource error preventing the GNOME applet from starting).

Aside from the connectivity issue (you won’t read any praise for NetworkManager here, as a tool like that is nothing special in any other OS designed for desktop use), the Ubuntu experience was a very pleasant one.

Suspend to RAM worked (hibernate didn’t – the machine doesn’t even start hibernating). Fonts looked OK. And best of all:

I was able to play videos (even HD, with sufficient performance) and watch a DVD. Hassle-free.

Granted, I had to install some legally not-so-safe packages (with the help of EasyUbuntu, which does the hard work for you), but you’d have to do that on any other system as well, so that’s OK IMHO.

This was a really pleasant experience.

And in the whole process, I only got three or four meaningless error messages or things silently not working that are supposed to work according to the documentation.

I’m good enough with computers to fix stuff like that and I had enough time to do it, so I’m not very upset about it. But I’ll only recommend Ubuntu as a real desktop OS once I can install it on a machine and connect to my home network without cryptic error messages and equally cryptic fixes (that NetworkManager bug).

Still: They’ve come a really long way in the past 6 months. Ubuntu is the first Linux distribution ever that manages to play an AVI video and a DVD without forcing me to tweak around for at least two hours.

More disk space needed

Can somebody explain to me why Mac OS X needs 4 TB of disk space to encrypt my home directory, which is currently about 15 GB in size?

Before I got this message, it wanted me to free another 1 KB, btw. When I did that and retried, this message popped up. Unfortunately, I can’t reproduce that other message, though.

A praise to VMWare Server

putty.png

This is PuTTY, showing the output of top on one of our servers. You can see three processes running that are obviously VMware related.

What’s running there is their new VMware Server. Here’s a screenshot of the web interface, which gives an overview of all running virtual machines and allows attaching a remote console to any of them:

web.png

As you can see, that server (which is not exactly a top-notch one) has more than enough capacity to do the work of three servers: a Gentoo test machine and a Windows 2003 Server machine doing some reporting work.

Even under high load on the host machine or the two virtual machines, the whole system remains stable and responsive. And it takes so much work to even get the VMs to high load that this configuration could be used in production right now.

Well… what’s so great about this, you might ask.

Running production servers in virtual machines has some very nice advantages:

  • It’s hardware independent. You need more processing power? More RAM? Just copy the virtual machine to a new host. No downtime, no reinstallation.
  • Need to move your servers to a new location? Easy. Just move one or two machines instead of five or more.
  • It’s much easier to administer. Kernel update after which the system doesn’t boot any more? Typed “shutdown -h” instead of “shutdown -r” (both happened to me)? Well… just attach the remote console. No more visits to the housing center.
  • Cost advantage. The host server you see is not one of the largest ever. Still, it’s able to handle real-world traffic for three servers, and we still have reserves for at least two more virtual machines. Why buy expensive hardware?
  • Set up new machines in no time: just copy over the template VM folder and you’re done.

And in case you wonder about the performance? Well, the VMs don’t feel the slightest bit slower than the host (I’ve not benchmarked anything yet, though).

We’re currently evaluating putting a configuration like this into real production use, but what I’ve seen so far looks very, very promising.

Even though I don’t think we’re going to need support for this (it’s really straightforward and stable), I’m more than willing to pay for a fine product like this one (the basic product will be free, while you pay for support).

Now, please add a native 64bit edition ;-)

mp3act

When you have a home server, sooner or later your coworkers and friends (and if all is well, even both in one person ;-) ) will want access to your library.

Cablecom, my ISP, has this nice 6000/600 service, so there’s plenty of upstream for others to use in principle. And you know: Here in Switzerland, the private copy among friends is still legal.

Well, last Sunday it was time again. Richard wanted access to my large collection of audiobooks, and if you know me (and you do, as a reader of this blog), you’ll know that I can’t just give him those files on a DVD-R or something. No. A web-based MP3 library had to be found.

The last few times, I used Apache::MP3, but that grew kind of old on me. You know: it’s a Perl module and my home server does not have mod_perl installed. I’m running Apache 2, to which Apache::MP3 has not been ported yet, AFAIK. And finally, I’m far more comfortable with PHP, so I wanted something written in that language so I could make a patch or two on my own.

I found mp3act, which is written in PHP and provides a very, very nice AJAX-based interface. Granted, it breaks the back button, but everything else is very well done.

And it’s fast. Very fast.

Richard liked it, and Christoph is currently trying to install it on his Windows server – not as successfully as he’d like; mp3act is quite Unix-only currently.

The project is in an early state of development and certainly has a rough edge here and there, but all in all it’s very well done, serves its need and is even easily modifiable (for me). Nice.

Opera Mini

Today, Opera released Opera Mini, a browser written in Java for all the Java-capable mobile phones out there.

By using a special proxy server, they manage both to minimize the traffic a usual browsing session generates and to keep the application as performant as possible.

When I tried to download the application via WAP, all I got was an ‘Invalid Jad Request’ error (whatever that meant), but with some sneakiness, I found the download URL for the jar file nonetheless (the linked version is the high-memory version; there’s another for less advanced phones).

I copied the file over to my K750i via Bluetooth, which was cheaper than downloading it and had the added advantage of actually working.

The browser is very nice. While it takes quite some time to launch, surfing is very quick. And the very good font rendering (of course Opera’s small-screen rendering is active as well) makes this a pleasure to use and is the first justification for a phone having a screen resolution as big as the K750i’s.

And the most interesting thing: Opera uses the default internet GPRS profile, not the WAP one. This makes surfing via Opera a whole lot cheaper than via my phone’s built-in WAP browser.

Congratulations, Opera. This rules!

(and thanks, Christoph, for pointing it out to me)

The most pleasant installation experience

The most pleasant experience I ever had when installing a web-based application was installing Gallery 2 Beta 1. I’ve never seen such a polished assistant; I’ve never seen a web-based installer work as well as Gallery’s did.

While I was really, really happy with this, I have not blogged about it (shame on me).

But now that I have updated to Beta 3, this really, really is cause for a blog entry.

The update process uses the same assistant style as the installer and is just as pleasant and unproblematic as the installation. Open your gallery, read, click “next”, repeat. Done. Fast, pleasant and error-free.

Congratulations to the Gallery team. You rock!

Oh, and the gallery is here.

Lots of fun with OpenVPN

OpenVPN may seem like “just another VPN solution” to you. And maybe you’re right.

However, OpenVPN has some distinct advantages over other VPN solutions that make it quite interesting for deployment:

  • NAT traversal. OpenVPN uses plain old UDP packets as its transport. Every NAT router in the world can forward them correctly out of the box. If not, create the usual port-forwarding rule and be done with it. If that fails too (however that might happen), use the TCP transport.
  • Ease of use: install, create two certificates, use the VPN. It’s as easy as 1-2-3.
  • Designed with small installations in mind. OpenVPN is not a big, slow beast like IPsec, for example. While it may not be as secure, it does not have all the problems associated with IPsec.
  • User space. OpenVPN runs completely in user space (while using the TUN device provided by the kernel). This way the installation is non-critical and requires no reboots. Updates in case of security problems do not require reboots either.

So after this unexpected praise: What brings me to writing this posting?

Well, I’ve just deployed one of the coolest things on earth: using OpenVPN, I have connected my home network to the network in the office. Both ends see each other and allow for direct connections. I’m not only able to print on the office’s printers from home (which admittedly is as useless as it is cool), but I’m also able to – for example – stream music from home to the office over a secured channel. All using straight IP connections, without any NAT trickery or other tricks.

Actually, not even one port is forwarded through my NAT gateway (a ZyAir B-2000 – as the AirPort base station does not allow for static routes (see below), I was forced to cross-grade).

I already had some of this functionality with my previously deployed PPTP setup, though that had some disadvantages:

  • Flaky support in Linux. Maintaining the beast across Windows and Mac versions was not easy, as something always broke with new versions.
  • Suboptimal security. You know: PPTP has flaws – quite like WEP. Though I’ve tried to work around them by using very, very long passwords.
  • Suboptimal usability: when I wanted to connect to the office, I had to dial into the VPN, so user interaction was needed. Additionally, the default gateway was redirected (I could have turned that off), so all open TCP connections got disconnected when I dialled.

My current solution has none of those problems (I can’t know about the security, of course – no one does; for now, OpenVPN is said to be secure): no dialling is required, and no problems with changing software versions are to be expected, as it runs on a dedicated router which I don’t intend to change. The default gateway is not changed either, so the usual internet connections go out directly. This way I’m unaffected by the office’s suboptimal upstream of 65 KBytes/s (unless I use services from the office, of course – but that is unavoidable).

So. What did I do?

First of all, I had to recompile the kernel on the server side once: I had not included TUN support when I created my .config last year. After that, emerge openvpn was all that was needed. I kept the default configuration file mostly intact (install with the “examples” USE flag and use the example server.conf), but made some minor adjustments:

local x.x.x.x
push "route 192.168.2.0 255.255.255.0"
client-config-dir ccd
route 192.168.3.0 255.255.255.0
#push "redirect-gateway"

(just the changed lines)

and the /etc/openvpn/ccd/Philip_Hofstetter:

iroute 192.168.3.0 255.255.255.0

Now, what does this configuration do?

  • Bind to the external interface only. This is for purely cosmetic reasons.
  • Push the route to the internal network to the client. Using the default configuration, all OpenVPN addresses are in the 10.8.0.0 network, which allows for nice firewall settings on the server side. The 192.168.2.0/24 network is our office network.
  • Tell OpenVPN that there are client-specific configuration options for reaching the 192.168.3.0/24 net, which is my home network.
  • Comment out the option that lets OpenVPN set the default gateway. We really don’t want all the traffic in my home net going through the office.

Then we create the client configuration file. It’s named after the CN you use in the SSL certificate, with spaces replaced by underscores. You can see the correct value by setting everything up and then connecting to the server while watching the logfile.
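Deriving the filename from the CN is simple enough to show as a one-liner (assuming, as in my case, that spaces are the only characters needing replacement):

```shell
# The ccd filename is the certificate's CN with spaces
# replaced by underscores
CN="Philip Hofstetter"
echo "$CN" | tr ' ' '_'    # -> Philip_Hofstetter
```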

In the client specific configuration-file we confirm the additional route we want to create.

The configuration file on the client router is unchanged from the default.
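For reference, the stock example client configuration boils down to something like the following; the hostname and certificate file names here are placeholders, and only remote plus the certificate paths need adjusting for a setup like mine:

```
client
dev tun
proto udp
remote vpn.example.com 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client.crt
key client.key
verb 3
```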

The only thing you need now are the SSL certificates. Create one for the server and one more for each client. I won’t go into this here, as it’s somewhat complicated in itself, but you’ll find lots of guides out there.
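To sketch what those guides (and the easy-rsa scripts shipped with OpenVPN) do under the hood, here is one way to do it with plain openssl; all names and validity periods below are made up:

```shell
mkdir -p /tmp/vpn-keys && cd /tmp/vpn-keys

# Self-signed CA certificate and key
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=Example VPN CA"

# Server key + signing request, then sign it with the CA
openssl req -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -subj "/CN=vpn-server"
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 3650 -out server.crt

# Client key + request; this CN is what the ccd filename is derived from
openssl req -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr -subj "/CN=Philip Hofstetter"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 3650 -out client.crt

# (A real OpenVPN server additionally needs Diffie-Hellman parameters,
#  e.g. openssl dhparam -out dh.pem 2048)
```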

I used our company’s CA to create the certificates for both the server and the client.

After this, it’s just a matter of /etc/init.d/openvpn start on both machines (the paths to the certificates/keys in the configuration files must match your created files, of course).

Just watch out for the routing: on the server side, I had to change nothing, as the server was already entered as the default gateway on all the clients in the office network.

In the client network, I had to do some tweaking, as the default gateway was set to the AirPort base station, which (understandably) knew nothing about the 192.168.2.0/24 network and so was unable to route the IP packets to the VPN gateway on the internal network (my Mac Mini).

Usually you solve that by installing a static route on the default gateway of your network. Unfortunately, this is not possible on an AirPort base station – a problem I solved by replacing it with a ZyAir B-2000 from Zyxel, which allows setting static routes.

On that new access point, I created a route equivalent to this Unix command:

route add -net 192.168.2.0 netmask 255.255.255.0 gw 192.168.3.240

where 192.168.3.240 is the address of my Mac Mini, on which OpenVPN runs as a client.

Then I issued “echo 1 > /proc/sys/net/ipv4/ip_forward” on the Mac Mini to allow the packets to be forwarded.
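That echo only lasts until the next reboot; to make forwarding permanent, the same flag can go into /etc/sysctl.conf:

```
# /etc/sysctl.conf – enable packet forwarding persistently
net.ipv4.ip_forward = 1
```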

So whenever I send packets to one of the office’s computers – let’s say 192.168.2.98 – this is what happens:

  1. The client uses its IP and netmask to find out that the packet cannot be delivered directly. It sends it to the default gateway (my ZyAir).
  2. The ZyAir consults its routing table for a route to 192.168.2.0/24 and finds 192.168.3.240 as the gateway for that network (every other address would have been routed through my cable modem).
  3. 192.168.3.240, shion, consults its own routing table, where OpenVPN has created a route through the VPN interfaces (10.8.0.x) to the 192.168.2.0/24 network. It delivers the packet there.
  4. On the other end of the tunnel, the OpenVPN server delivers the packet to the destination server.

The path of the reply packets is the same – just in reverse.

After getting the routing the way I wanted it (verifiable by pinging between computers in both networks), the next step was pure cosmetics:

  • Set up an internal DNS server. Use it as a slave for the office’s DNS server so DNS lookups work without crossing the VPN each time.
  • Use said DNS server to create entries for the computers in my home network.
  • Make the office DNS server a slave for that home zone (to reach my computers by name).
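With BIND, such a master/slave arrangement is just a few lines of named.conf on each side; the zone names and master addresses below are placeholders:

```
// On the home server: pull the office zone from the office DNS server
zone "office.example.com" {
        type slave;
        masters { 192.168.2.1; };
        file "sec/office.example.com";
};

// On the office server: the home zone, mirrored the other way around
zone "home.example.com" {
        type slave;
        masters { 192.168.3.240; };
        file "sec/home.example.com";
};
```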

All of this was most interesting to implement and went much more smoothly than anything else I’ve tried so far VPN-wise. Finally, I have the optimal solution concerning connectivity to my office.

And besides that: It was fun to implement. Just worthy of a “Supreme nerd” – the title I got here for my 92 points.

FreeNX

nx.png

FreeNX is the GPLed variant of NoMachine’s NX product.

While exporting X sessions has never been a problem, it was kind of slow, especially on connections with limited bandwidth. NX tries to solve this with some tricks at the X11 protocol level, a little proxy server and a big local bitmap cache. They promise fluently working X sessions even over a 56K modem.

Well, I have installed KDE and now FreeNX on my Mac Mini, which I bought for the sole purpose of being a little home server / VPN gateway. My NSLU2, while being a really nice little thing, does not work with OpenVPN, as its kernel lacks TUN support.

Installation was easy and flawless – besides forcing me to forward port 5000 to the NATed Mac Mini, as the commercial (freeware) Windows client seems to have problems with the FreeNX server when tunneling the X session over SSH.

The client works very well too. And I can say: It’s fast. Very, very fast.

Some more things to note about the screenshot:

  • While I used to have the policy of naming servers after persons and then locations from “Lord of the Rings”, I somewhat ran out of names, so I began using names from RPGs. My Mac Mini is called Shion, after Shion Uzuki of Xenosaga.
  • I’m running Gentoo, of course.
  • Installing FreeNX is as easy as emerge nxserver-freenx on Gentoo.
  • The screenshot is of a session exported at 800×600 pixels. Using more pixels does not slow down the session significantly, but 800×600 is comfortable on my current display, leaving room for other things beside the session.

AWStats

For the last five years or so, I’ve been using ModLogAn for my/our web analyzing needs: the tool is fast and much more powerful than Webalizer, which I was using before ModLogAn.

Getting it to run was a bit difficult at first (requiring a hacked GD library and all that), but this gradually got better. Since then the tool has done a wonderful job (except for one broken release about three years ago).

With all the buzz about the phpBB.com incident, which happened because of a hole in AWStats, I wanted to give said tool (in a fixed version, of course) a shot.

The Gentoo ebuild is tightly integrated with webapp-config, which I had not used before, so the installation was somewhat difficult for me, but some symlinks here and there soon gave me a working setup.

I must say that I’m impressed by the tool’s capabilities: it’s quite fast (not as fast as ModLogAn, but fast enough), its CGI user interface profits from its dynamic nature (filtering long lists in realtime, for example), the plugins provided with it are very cool (GeoIP, whois, …) and as soon as one understands how it ticks, it’s really easy to configure and manage.

Useful for some people is its ability to update the statistics in realtime by analyzing the current rotation of the logfile – another thing ModLogAn isn’t capable of.

And finally, there are the looks – as always. AWStats looks much more pleasant than ModLogAn does (even with the template plugin, which has the nicest look of them all).

I’ve not decided yet whether I should replace the currently well-working ModLogAn setup or not, but I’ve certainly analyzed the whole backlog of gnegg.ch (link to the tool removed due to gnegg.ch redesign).

IRC Clients

When my favourite game movies site (written about here and here) went offline last week, I ventured a look into its IRC channel to find out what was going on.

Chatting with the guys there was so much fun that I decided it was time to get into IRC after all (I never really used it before, so I did not have much insight into this part of the net).

Soon after this decision, I began learning the ins and outs of IRC, and the first thing I did was set up a bouncer (an IRC proxy – it lets you stay logged into a channel despite your client machine being offline; very useful for getting an overview of what happened while you were away). There are quite a few available, but the only one that seems to still be maintained is ctrlproxy.

If you plan on using mIRC with it, go and install the current pre-release 2.7pre2. Older versions don’t let you connect.

Next was the question which client to use.

While mIRC is nice, it has two problems: a) it’s single-platform. As I’m constantly using all three of Win/Mac/Linux, a single program would be nice so I don’t have to relearn all the shortcuts for each platform. b) It does not look very polished and cannot be made to.

Klient looks much better, but is still single-platform and has problems recognizing the state when reconnecting to ctrlproxy (it sometimes does not notice that you are already in a channel).

virc looks better than mIRC, but worse than Klient. Plus, it seemed a bit unstable to me. And it was slow displaying the backlog. Very slow. It’s single-platform too (and written in Delphi, it seems).

irssi is single-platform too, but I could work around that by running it on our web server inside screen.

A program that warns with

17:43 -!- Irssi: Looks like this is the first time you've run irssi.
17:43 -!- Irssi: This is just a reminder that you really should go read
17:43 -!- Irssi: startup-HOWTO if you haven't already. You can find it
17:43 -!- Irssi: and more irssi beginner info at http://irssi.org/help/
17:43 -!- Irssi:
17:43 -!- Irssi: For the truly impatient people who don't like any automatic
17:43 -!- Irssi: window creation or closing, just type: /MANUAL-WINDOWS

before starting, and with no obvious way to exit it (Ctrl-C, quit, exit – none of them worked), is something I’m afraid of (quite like vim, though I learnt to love that one). So: no-go.

Finally, I ended up with X-Chat. It looks good, has all the features I need, has a big userbase, is maintained and is multi-platform after all.

There was this fuss about the Windows version becoming shareware, but I can live with that, as the tool is very, very good. To support its author, I gladly paid those $20 (I see it as a packaging fee – just like with those Linux distributions), though you can get a Windows binary for free here.

So for me, it’s X-Chat. And much fun in #nesvideos