Shell history stats

It seems to be cool nowadays to post the output of a certain Unix command to one's blog. So here I come:

pilif@celes ~
 % fc -l 0 -1 |awk '{a[$2]++ } END{for(i in a){print a[i] " " i}}'|sort -rn|head
467 svn
369 cd
271 mate
243 git
209 ssh
199 sudo
184 grep
158 scp
124 rm
115 ./clitest.sh
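
In case the pipeline looks like line noise: fc -l prints the shell history, field two of each line is the command name (field one is the history event number), and the awk hash does the counting. The same pipeline, taken apart and commented:

# list the whole history, count the first word of every command,
# then print "count command" pairs, sorted numerically, top ten only
fc -l 0 -1 \
  | awk '{ counts[$2]++ } END { for (cmd in counts) print counts[cmd], cmd }' \
  | sort -rn \
  | head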

clitest.sh is a small wrapper around wget which I use to do protocol-level debugging of the PopScan Server.
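
I can't post clitest.sh itself, but a minimal sketch of such a wrapper might look like this – the server URL and the query handling are made up for illustration; the real script obviously knows PopScan's actual protocol:

#!/bin/sh
# hypothetical wget wrapper for protocol-level debugging
# SERVER is an assumption -- substitute the real endpoint
SERVER="http://popscan.example.com"
CMD="$1"; shift
# -S dumps the response headers, -O - writes the body to stdout,
# so the whole HTTP exchange can be inspected on the console
wget -S -O - "$SERVER/$CMD?$(IFS='&'; echo "$*")"

Called as ./clitest.sh search q=foo limit=10, it joins the remaining arguments into a query string and shows the raw response.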

Impressed by git

The company I’m working with is a Subversion shop. It has been for a long time – since fall of 2004 actually, when I finally decided that the time for CVS was over and that I was going to move to Subversion. As I was the only developer back then and the whole infrastructure mainly consisted of CVS and ViewVC (cvsweb back then), this was an easy move.

Now, we are a team of three developers, heavy Trac users and truly dependent on Subversion, which is – mainly due to the amount of infrastructure that we built around it – not going away anytime soon.

But nonetheless: we (mainly I) were feeling the shortcomings of Subversion:

  • Branching is not something you do easily. I tried working with branches before, but merging them really hurt, thus making it somewhat prohibitive to branch often.
  • Sometimes, half-finished stuff ends up in the repository. This is unavoidable when the only alternative is keeping a bucket load of uncommitted changes in the working copy.
  • Code review is difficult, as actually trying out patches is a real pain: sending, applying and reverting them is all manual work.
  • A pet peeve of mine, though, is untested, experimental features developed out of sheer interest. Stuff like that lies in the working copy, waiting to be reviewed or even just to have its real-life use discussed. Sooner or later, a needed change must go in and you have three options: sneaking in the change (bad), manually diffing out the change (hard to do sometimes) or just forgetting it and svn reverting it (a real shame).

Ever since the Linux kernel first began using BitKeeper to track development, I have known that there is no technical reason for these problems. I knew that a solution for all of this existed and that I just wasn’t ready to try it.

Last weekend, I finally had a look at the different distributed revision control systems out there. Due to the insane amount of infrastructure built around Subversion, and in order not to scare off my team members, I wanted something that integrated with Subversion, using its repository as the canonical place where official code ends up while still giving us the freedom to fix all the problems listed above.

I had a closer look at both Mercurial and git; in the end, git’s nicely working SVN integration was what made me dig deeper into it.

Contrary to what everyone is saying, I have no problem with the interface of the tool – once you learn the terminology, it’s quite easy to get used to the system. So far, I have done a lot of testing with both live repositories and test repositories – everything working out very nicely. I’ve already seen git’s impressive branch merging abilities (to think that in Subversion you actually have to a) find out at which revision a branch was created and b) remember every patch you cherry-picked… crazy) and I’m getting into the details more and more.

On our Trac installation, I’ve written a tutorial on how we could use git in conjunction with the central Subversion server, which allowed me to learn quite a lot about how git works and what it can do for us.
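
The gist of that tutorial boils down to surprisingly few commands. This is just stock git-svn usage (the repository URL is a placeholder), so consider it a sketch rather than our exact setup:

# clone the central Subversion repository once; -s assumes the
# standard trunk/branches/tags layout
git svn clone -s http://svn.example.com/repo myrepo
cd myrepo

# hack away on as many local branches as you like
git checkout -b experimental-feature

# pull new revisions from Subversion and rebase local work on top
git svn rebase

# push the commits that are ready back into Subversion
git svn dcommit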

So for me it’s git-all-the-way now and I’m already looking forward to being able to create many little branches containing many little experimental features.

If you have the time and you are interested in gaining many unexpected freedoms in matters of source code management, you too should have a look at git. Also consider that on the Subversion side, no change is needed at all, meaning that even if you are forced to use Subversion, you can privately use git to help you manage your work. Nobody would ever have to know.

Very, very nice.

Closed Source on Linux

One of the developers behind the new Flex Builder for Linux has a blog post about how building closed-source software for Linux is hard.

Mostly, the problems boil down to the fact that Linux distributors keep patching the upstream source to fit their needs – which, clearly, is a problem rooted in the fact that open source software is, well, open source.

Don’t get me wrong. I love the concepts behind free software and in fact, every piece of software I’ve written so far has been open source (aside from most of the code I’m writing for my employer, of course). I just don’t see why every distribution feels the urge to patch around upstream code, especially as this issue applies to both open- and closed-source software projects.

And worse yet: Every distribution adds their own bits and pieces – sometimes doing the same stuff in different ways and thus making it impossible or at least very hard for a third party to create add-ons for a certain package.

What good is a plugin system if the interface works slightly differently on each and every distribution?

And think of the time you waste learning configuration files over and over again. To give an example: some time ago, SuSE delivered an Apache server that used a completely customized configuration file layout, thereby breaking every tutorial and piece of documentation written out there, because none of the directives were in the files they were supposed to be in.

Other packages are deliberately broken up. BIND, for example, often comes in two flavors – the server and the client – even though officially, you just get one package. Additionally, every library package these days is broken up into the actual library and the development headers. Sometimes the source of these packages even gets patched to support such a split.

This creates an incredible mess for all involved parties:

  • The upstream developer gets blamed for bugs she didn’t cause because they were introduced by the packager.
  • Third-party developers can’t rely on their plugins or other pluggable components working across distributions, even if they work against the upstream version.
  • Distributions have to do the same work over and over again as new upstream versions are released, thus wasting time better used for other improvements.
  • End users suffer from the general inability to reliably install precompiled third-party binaries (MySQL recommends the use of their own binaries, so this even affects open source software) and from the inability to follow online tutorials not written for the particular distribution that’s in use.

This mess must come to an end.

Unfortunately, I don’t know how.

You see: Not all patches created by distributions get merged upstream. Sometimes, political issues prevent a cool feature from being merged, sometimes clear bugs are not recognized as such upstream and sometimes upstream is dead – you get the idea.

Solutions like the FHS and the LSB have tried to standardize many aspects of how Linux distributions should work, in the hope of solving this problem. But bureaucracy and idiotic ideas (German link, I’m sorry) have been causing quite a bunch of problems lately, making the standards hard or even impossible to implement. And often the standards don’t cover the latest and greatest parts of current technology.

Personally, I’m hoping that we’ll either end up with one big distribution defining the “state of the art” and the others being 100% compatible, or with distributions switching to pure upstream releases, custom-making only their own tools.

What do you think? What has to change in your opinion?

Podcast recommendation

I haven’t been much into podcasts till now: The ones I heard were boring, unprofessional or way too professional. Additionally, I didn’t have a nice framework set up to get them and to listen to them.

That’s because I don’t often sync my iPod. Most of the time, it’s not connected to a computer: about once every two months, I connect it to upload a new batch of audiobooks (I can’t fit my whole collection on the nano). So podcasting was – even if I had found a podcast I could interest myself in – an experience to have while sitting behind the computer monitor.

Now two things have changed:

  1. I found the Linux Action Show. The guys doing that podcast are incredibly talented people. The episodes sound very professionally made, while still not being on the obviously commercial side of things. They cover very, very interesting topics and they are everything but boring. Funny, entertaining and competent. Very good stuff.
  2. At least since the release of SlimServer 6.5, my Squeezebox is able to tune into RSS feeds with enclosures (or podcasts for the less technically savvy people – not that those would read this blog). Even better: the current server release brought a firmware which finally gives the Squeezebox the capability of natively playing Ogg streams.

    Up until now, it could only play FLAC, PCM and MP3, requiring tools like sox to convert Ogg streams on the fly. Unfortunately, that didn’t work as stably as I would have liked, but native Ogg support helped a lot.

So now, whenever a new episode of the podcast is released (once per week – and each episode is nearly two hours long), I can use my Squeezebox to listen to it on my home stereo.

Wow… I’m so looking forward to doing that in front of a cozy fire in my fireplace once I can finally move into my new flat.

Intel Mac Mini, Linux, Ethernet

If you have one of these new Intel Macs, you will sooner or later find yourself in the situation of having to run Linux on one of them. (Ok. Granted: The situation may be coming sooner for some than for others).

Last weekend, I was in that situation: I had to install Linux on an Intel Mac Mini.

The whole thing is quite easy to do and if you don’t need Mac OS X, you can just go ahead and install Linux like you would on any other x86 machine (provided the hardware is sufficiently new to have the BIOS emulation layer already installed – otherwise you have to install the firmware update first; you’ll notice this by the Mac not booting from the CD despite you holding c during the initial boot sequence).

You can partition the disk to your liking – the Mac bootloader will notice that there’s something fishy with the partition layout (the question-mark-on-a-folder icon will blink one or two times) before passing control to the BIOS emulation, which will be able to boot Linux from the partitions you created during installation.

Don’t use grub as bootloader though.

I don’t know if it’s something grub does to the BIOS or if it’s something about the partition table, but grub can’t launch stage 1.5 and thus is unable to boot your installation.

lilo works fine though (use plain lilo when using the BIOS emulation for the boot process, not elilo).
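
For reference, a minimal lilo.conf of the kind I mean might look like this – the device names and the kernel path are examples, adjust them to your actual partitioning:

# /etc/lilo.conf -- minimal example (device names are placeholders)
# write the boot sector where the BIOS emulation will look for it
boot = /dev/sda
# the kernel image and its root partition (examples)
image = /boot/vmlinuz
  root = /dev/sda3
  label = linux
  read-only

Don’t forget to run lilo afterwards to actually write the boot sector.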

When you are done with the installation process, something bad will happen sooner or later though: Ethernet will stop working.

This is what syslog has to say about it:

NETDEV WATCHDOG: eth0: transmit timed out
sky2 eth0: tx timeout
sky2 eth0: transmit ring 60 .. 37 report=60 done=60
sky2 hardware hung? flushing

When I pulled the cable and plugged it in again, the kernel even oops’ed.

The Macs have a Marvell Yukon Ethernet chipset. This is what lspci has to tell us: 01:00.0 Ethernet controller: Marvell Technology Group Ltd. 88E8053 PCI-E Gigabit Ethernet Controller (rev 22). The driver to use in the kernel config is “SysKonnect Yukon2 support (EXPERIMENTAL)” (CONFIG_SKY2)

I guess the EXPERIMENTAL tag is warranted for once.

The good news is that this problem is fixable. The bad news is: it’s tricky to do.

Basically, you have to update the driver to the version that is in the repository of what’s going to become kernel 2.6.19.

Getting a current version of sky2.c and sky2.h is not that difficult. Unfortunately though, the new driver won’t compile with the current 2.6.18 kernel (and upgrading to a pre-rc is out of the question – even more so considering the ton of stuff going into 2.6.19).

So first, we have to patch in this changeset to make the current release of sky2 compile.

Put the patch into /usr/src/linux and apply it with patch -p1.

Then fetch the current revision of sky2.c and sky2.h and overwrite the existing files. I used the web interface to git for that as I have no idea how the command line tools work.

Recompile the thing and reboot.
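
Roughly, the whole procedure looks like this – the patch file name is illustrative, and the driver lives in drivers/net/ in the 2.6.18 tree:

cd /usr/src/linux
# first, apply the compatibility changeset mentioned above
patch -p1 < /path/to/changeset.patch   # illustrative file name
# then drop in the new driver files fetched from the 2.6.19 tree
cp /path/to/new/sky2.c /path/to/new/sky2.h drivers/net/
# rebuild kernel and modules
make && make modules_install
# then copy the new kernel image to /boot, rerun lilo and reboot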

For me, this fixed the problem with the sky2 driver: the machine in question has now been running for a whole week without any networking lockups – despite heavy network load at times.

While I’m happy to see this fixed, my old advice not to buy too-new hardware if you intend to run Linux on it (posting number 6 here on gnegg.ch – ages ago) seems to continue to apply.

Amazing Ubuntu

I must say, I’m amazed how far Ubuntu Linux has come in the last 6 months.

When I tried 5.10 last October, it was nice, but it was still how I had experienced Linux ever since I first tried it on the desktop – flaky: WLAN didn’t work, DVDs didn’t work, videos didn’t work (well… they did, but audio and video desynced after playing for more than 10 seconds), fonts looked crappy compared to Windows and OS X, and suspend and hibernate didn’t work (or rather worked too well – the notebook didn’t come up again after suspending / hibernating).

I know there were tutorials explaining how to fix some of the problems, but why work through tons of configuration files when I can just install Windows or OS X and have it work out of the box?

Now, yesterday, I installed Ubuntu 6.06 on my Thinkpad T42.

Actually, I tried updating my 5.10 installation, but after doing so, my network didn’t work any longer. And in contrast to Windows, OS X and even Gentoo Linux, where the fix is obvious or well documented with useful error messages, I had no chance of fixing it in Ubuntu on short notice.

Seeing that I had no valuable data on the machine, I could just go ahead with the reinstallation.

WPA still didn’t work with the tools provided by default. Now, we all know that WEP is not safe any more and, in my personal experience, it is much flakier than WPA (connections dropping or not even coming up). How can a system like Linux, which is that security-centered, not support WPA? Especially as it also works better than WEP.

To Ubuntu’s credit I have to say that a tool to fix WPA on the desktop, NetworkManager, was released post-feature-freeze. If you know what to do, it’s just a matter of installing the right packages to get it to work (and fixing some strange icon resource error preventing the GNOME applet from starting).
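
Under the hood, NetworkManager drives wpa_supplicant, so a hand-written equivalent is one small config file. SSID, passphrase and interface name below are placeholders, of course:

cat > /etc/wpa_supplicant.conf <<'EOF'
# placeholder SSID and passphrase -- substitute your own
network={
    ssid="my-home-network"
    psk="my secret passphrase"
}
EOF
# then, roughly (interface name and driver backend vary per setup):
wpa_supplicant -i eth1 -D wext -c /etc/wpa_supplicant.conf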

Aside from the connectivity issue (you won’t read any praise for NetworkManager here, as a tool like that is nothing special in any other OS designed for desktop use), the Ubuntu experience was a very pleasant one.

Suspend to RAM worked (hibernate didn’t – it doesn’t even begin to hibernate). Fonts looked OK. And best of all:

I was able to play Videos (even HD with sufficient performance) and watch a DVD. Hassle-free.

Granted, I had to install some legally not-so-safe packages (with the help of EasyUbuntu, which does the hard work for you), but you’d have to do that on any other system as well, so that’s OK IMHO.

This was a really pleasant experience.

And in the whole process I only got three or four meaningless error messages, or had stuff silently not work that is supposed to work according to the documentation.

I’m good enough with computers to fix stuff like that and I had enough time to do it, so I’m not very upset about it, but I’ll only recommend Ubuntu as a real desktop OS once I can install it on a machine and connect to my home network without cryptic error messages and equally cryptic fixes (that NetworkManager bug).

Still: They’ve come a really long way in the past 6 months. Ubuntu is the first Linux distribution ever that manages to play an AVI video and a DVD without forcing me to tweak around for at least two hours.

Computers under my command (2): marle

While everyone keeps calling her Marle, she is actually Princess Nadia of the Kingdom of Guardia in what many people call the best console RPG ever made: Chrono Trigger.

Chrono Trigger was one of the last RPGs Squaresoft ever did for the SNES and it’s special in many ways: excellent music (by Yasunori Mitsuda), excellent graphics, smooth gameplay, a really nice story and excellently done characters.

Robo, Frog, Lucca, Marle, Crono, Magus and Ayla – every one of them has their very own style and story. Aside from Crono, who is quite the ordinary guy, every one of them is special in their own way.

The server marle is special too.

It’s not as outstanding as shion, but it was the first 64-bit machine running a 64-bit OS I ever deployed.

The OS was Gentoo Linux (as usual) and the machine itself is some IBM xSeries machine equipped with a 3 GHz Xeon processor and 2 GB of RAM, so basically nothing you need 64 bits for.

It still was an interesting experiment to get the machine to work with a 64-bit OS, though it all went completely uneventfully.

Ever since it was deployed, marle has been running at a customer’s site without crashes or other problems.

marle ~ # uptime
     11:56:13 up 265 days, 44 min,  2 users,  load average: 0.00, 0.01, 0.00

Not much happening there currently I guess. Also, it’s amazing how quickly time passes – installing that machine feels like it was only yesterday.

Computers under my command – Issue 1: shion

[Picture of the “real” Shion Uzuki]

After yesterday’s fun with one of my servers, I thought I could maybe blog about some of them – especially when they are kind of “special” to me.

Of course, the first machine I’m looking at is my PowerPC Mac mini which I called “Shion”, after the girl Shion Uzuki of the Xenosaga trilogy.

I don’t really have a very advanced naming scheme for my servers, but the important ones get names I tend to remember.

First it was people from Lord of the Rings (with Windows servers getting names belonging to the evil people). Then, after I ran out of names, it was places in LotR, and after I ran out of those too, I began naming (important) servers after girls in console RPGs.

Of all those names, I guess shion is a very fitting one for a server. In the game, Shion is a robotics engineer and the inventor of the android called KOS-MOS.

And in my network, shion has a special place:

I initially bought the machine to run SlimServer on it, as my previous NSLU solution was not really usable as hardware for the heavy, Perl-based server.

After I had replaced the slim server, I obviously installed a Samba server on shion to serve the non-music files as well. Back then, I only had one external drive connected to the server.

Next thing to get installed was OpenVPN, which I used to build quite a nice configuration allowing me transparent access from and to the office.
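
In essence, that configuration is just a routed OpenVPN tunnel. A stripped-down sketch of the server side – certificate paths and subnets are example values, not my real setup:

# /etc/openvpn/server.conf -- minimal routed setup (example values)
dev tun
proto udp
port 1194
# the VPN subnet handed out to clients
server 10.8.0.0 255.255.255.0
ca   /etc/openvpn/ca.crt
cert /etc/openvpn/shion.crt
key  /etc/openvpn/shion.key
dh   /etc/openvpn/dh1024.pem
# make the home LAN behind the server reachable for clients
push "route 192.168.1.0 255.255.255.0"
keepalive 10 120
persist-key
persist-tun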

Shortly after that, I finally found a USB Ethernet adapter, which let shion replace my ZyAir access point. I also had to buy a USB hub back then and decided to use its remaining two ports to plug in additional hard drives, leading to shion’s current disk capacity of roughly 1.2 TB.

Then I installed mp[3]act (I’ve also blogged about it) and shortly after replaced it with Jinzora due to mp[3]act being quite bug-ridden and not in development any longer. (update 2013: links removed – mp[3]act is now pointing to a porn site and Jinzora is gone)

In all that time (one year of operation), shion never crashed on me. Overall, the stability of my home network has gone through the roof since I switched all tasks over to her: no more strange connection losses. No more rebooting router and cable modem when lots of outgoing connections are active. No more inexplicable slowness in the internal network.

Shion does a wonderful job for me and I would never ever go back to any less flexible or stable solution.

Lately, I thought about maybe ditching her for a more powerful Intel-based Mac Mini, but in the end shion is fast enough for my current purposes and I could never ditch a machine as nice as this one.

Flexible, stable, fast, quiet and quite inexpensive. A machine worthy of being referred to with a name and a female pronoun.

Linux, PowerPC, gcc, segmentation fault

If you asked me to name the one machine in my possession I love the most, that would be my Mac Mini.

It’s an old PPC one I bought a bit more than a year ago with the intention of installing Linux on it and using it as a home server/router. It’s not the quickest machine there is, but it’s the most quiet, and it does its job like no other machine I ever had: it’s Samba file server, OpenVPN gateway, BitTorrent client, MP3 streaming server, SlimServer – just about everything you could ever use a home server for.

From the beginning, it was clear to me: The distribution I’m going to install on the beauty was to be Gentoo Linux. This decision was based on multiple reasons, from hard facts like always current software to soft facts like nice command-prompts.

Basically, the machine just sat there after I installed it, doing its job – until this week, when I wanted to install some software on it: mainly the unrar command, to extract some file right on one of the external HDs I plugged in (shion – that’s what the machine is called – is connected to about 1 TB worth of external HDs).

Unfortunately, emerge unrar failed.

It failed hard with a SIGSEGV in gcc (or its cousin cc1).

Naturally, I assumed there was some bug in the gcc I originally installed (3.3 something – as I said: I had not touched the installation for a year) and I tried to re-emerge gcc.

… which ALSO failed with a segmentation fault.

I had no interest whatsoever in reinstalling the box – I had invested far too much time in its configuration. Cron jobs here, certificates there, home-grown scripts everywhere. Even with all the backups I had in mind, I did not want to do that kind of job. Besides: who says it’s really a software problem? Maybe the hardware is at fault, which would mean that all my work would be in vain.

Searching for “gcc segmentation fault ppc” on Google is… interesting… but not really helpful if you actually want a solution to the problem.

In the end, I mentally prepared myself to go on with the reinstallation – still hoping it’d be a software problem.

And by accident, I came across the Gentoo PPC FAQ which I more or less read out of pure interest while waiting for the ISO to be burned.

To my biggest delight, said FAQ was really helpful, as it had a question that went “Why does gcc keep segfaulting during ebuilds?”

So it is a kernel problem! Of course I had preemption enabled! And that option – while working perfectly on all my x86 boxes – causes cache corruption on PPC.
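
The culprit is the CONFIG_PREEMPT kernel option. Checking for it is a one-liner; the menu wording below is from memory and may vary between kernel versions:

cd /usr/src/linux
grep CONFIG_PREEMPT .config   # CONFIG_PREEMPT=y means preemption is on
# rebuild with "No Forced Preemption (Server)" selected under the
# "Preemption Model" choice in make menuconfig, then reboot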

Now that I knew what the problem was, I had two possible ways to go on: Quick and dirty or slow, but safe and clean:

  1. Recompile the kernel on the obviously defective machine, hoping the cache corruption would not hit or at least would not lead to a non-bootable kernel being compiled.
  2. Boot from a Gentoo live-CD, chroot into my installation, recompile the kernel.

Obviously, I took option 1.

I had to repeat the make command about 20 times as it kept failing with a segmentation fault every now and then. Usually I got away with just repeating the command – the cache corruption is random, after all.
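
If you ever find yourself in the same spot, a tiny shell loop saves the retyping – it simply reruns make until one run gets through without a segfault:

until make; do
    echo "make failed (probably another random SIGSEGV), retrying..."
done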

I was unable to get past the compilation of reiserfs though – thank god I’m using ext3, so I could just remove that from the kernel and continue with my make-loop.

Rebooting that kernel felt like doing something really dangerous. I mean: If the cache corruption leads to a SIGSEGV, that’s fine. But what if it leads to a corrupted binary? And I was going to boot from it…

To my delight, this worked flawlessly and I’m writing this blog entry behind the rebooted Mac Mini router. This time, even compiling the all-new gcc 4.1.1 worked as expected, so I guess the fix really helped and the hardware is OK.

Personally, I think fixing this felt great. And in retrospect, I guess I was lucky as hell to have read that FAQ – without it, I would have gone ahead with the reinstallation, compiling yet another kernel with preemption enabled which would have led to just the same problems as before.

Maybe the (very talented) Gentoo Handbook guys should add a big, fat, red (and maybe even blinking) warning to the handbook telling the user not to enable preemption in the kernel.

I know it’s in the FAQ, but why is it not in the installation handbook? That’s what you are reading anyway when installing Gentoo.

Still: Problem solved. Happy.

Tweaking Mac OS X for the Linux/Windows user

As you no doubt know by now, I’m gradually switching over from using Windows to using Mac OS X.

I have quite some experience with using Unix and I’d love to have the power of the command line combined with the simplicity of a GUI every now and then.

OS X provides that advantage to me: on the one hand, I’m getting a stylish and time-tested UI and the ability to run most applications I need (this is where Linux still has some problems); on the other hand, I’m getting a nice, well-known (to me) command-line environment.

Of course, in my process of switching over, I made some tweaks to the system which I’m sure some of my readers may find useful:

  • Use a useful default shell: I very much prefer ZSH, so chsh -s /bin/zsh was the first thing I did.
  • Use a useful configuration for said shell: I’m using this .zshrc. It configures some options, enables a nice prompt, fixes the delete key, sets the path and does other small cosmetic things.
  • Install the developer tools. They are on your install DVD(s).
  • Go and install Fink. No UNIX without some GNU utilities and other small tools. The current source distribution works perfectly well with the Intel Macs.
  • Fix the Home and End keys (see the sketch after this list).
  • Tweak the terminal: open the window settings, choose “Display”, use a reasonable cursor (underline) and set your terminal to Latin-1 (I had numerous problems using UTF with ZSH). If you want, enable anti-aliasing. Then choose “Color”, use the “White on Black” preset and play with the transparency slider. Save the settings as default.
  • Install VLC – your solution for every thinkable multimedia need. Make sure to get the Intel nightly if you have an Intel Mac.
  • I never use sleep mode because it feels “wrong” not to shut the machine down completely. That’s why I entered sudo pmset -a hibernatemode 1 to make the “Sleep” option in the Apple menu work like Hibernate in Windows.
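
For the Home/End fix mentioned above, the usual trick is a per-user key bindings dictionary for Cocoa applications (Terminal has its own keyboard settings dialog). The file does not exist by default; the bindings below are the commonly suggested ones:

mkdir -p ~/Library/KeyBindings
cat > ~/Library/KeyBindings/DefaultKeyBinding.dict <<'EOF'
{
    "\UF729" = moveToBeginningOfLine:;  /* Home */
    "\UF72B" = moveToEndOfLine:;        /* End  */
}
EOF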

If you are a web developer on an Intel Mac and are considering using PostgreSQL, don’t use the premade builds on entropy.ch, because they are still built for PPC. You may use the StartupItem provided there, though. If you do, call PostgreSQL’s configure like this to get the paths right:

./configure --prefix=/usr/local/pgsql --bindir=/usr/local/bin --with-openssl \
  --with-pam --with-perl --with-readline --with-libs=/sw/lib \
  --with-includes=/sw/include

This is after you’ve installed readline using fink. OS X itself does not come with readline and psql without readline sucks.

After installing PostgreSQL with make install, the paths are set correctly for the premade StartupItem, which makes PostgreSQL start when you turn on your machine.

Furthermore, I created my own customized PHP installation (5.1.2) using the following configure line:

./configure --enable-cli --prefix=/usr/local --with-pear --with-libxml-dir=/sw \
  --with-apxs=/usr/sbin/apxs --enable-soap --with-pgsql=/usr/local/pgsql \
  --with-readline=/sw --with-pdo-pgsql=/usr/local/pgsql --enable-pcntl \
  --with-curl=/usr --enable-ftp --with-gd --with-png-dir=/sw --with-jpeg-dir=/sw \
  --with-zlib-dir=/usr --with-freetype-dir=/usr/X11R6 --with-bz2

Use fink to install libxml2, libjpeg and libpng first.
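
Assuming the standard Fink package names (they may differ slightly between Fink releases), that is a one-liner:

fink install libxml2 libjpeg libpng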

Using the hints provided here, you’ll get a configuration which makes working with the machine much easier for a UNIX/Windows guy. I hope it’s of some use to you.