One day with Serendipity

Here we go: Everything migrated. Every link (hopefully) fixed. Worked around (I think) some problems with images uploaded from MT clashing with Serendipity’s (s9y from now on) mod_rewrite handling and re-categorized every entry: the new gnegg.ch is up and running.

So, how is life with s9y?

First of all: I have not received a single SPAM comment. This is due to the better SPAM countermeasures and due to all URLs changing. I’ll have to see how well the SPAM prevention works over time, though I suspect it can’t be that bad (see below).

While s9y is slower than MT in delivering pages (understandable, considering MT generates static pages), it’s more feature-rich than MT – at least if you consider s9y to be a blogging engine, not a framework for creating blogging-engine-like tools.

I love the plugin system: There’s nothing you can’t write a plugin for and people seem to have noticed that – at least considering the wealth of plugins available for you to download and install (directly from the administration interface).

Also, because I’m using a premade template and because s9y is a bit more intelligent in reusing templates, the whole site finally has a consistent look. No more usage of outdated templates when commenting or displaying error messages.

The most interesting thing though is the SPAM prevention: When you post a comment, it will go through the following procedure:

  • Is it exactly the same comment as one posted before? If so, reject it. This prevents a spammer that got through once from getting through again, and it prevents you from double-posting by accident.
  • If your IP address posts a comment within 2 minutes of posting another one, the comment will be rejected. I know proxy servers and NAT routers exist, and I will tweak the interval should this blog ever get more popular. A cookie-based approach obviously doesn’t work to flood-protect the blog from malicious spammers.
  • If the comment points to a URL listed on SURBL, it will be rejected. I’m sorry, but this is a sacrifice I have to ask for.
  • If you post a comment to an entry older than 30 days, it will go straight to moderation. I promise to approve it as soon as possible.
  • If you post to an entry older than 7 days, you’ll have to solve a captcha, just to be sure. If you cannot solve it, feel free to contact me via email.
  • If you post a comment with more than 3 links, I’ll have to approve it first. If you post more than 20 links, it will be rejected outright.
  • A word filter is active as well, though I think the measures above stop the spam before it even gets here.
  • If all this fails, I’m sure the SPAM will be detected by Akismet.
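
In code, the whole chain boils down to something like the following sketch. To be clear: this is just my illustration of the order of checks described above, not s9y’s actual code, and every function name is made up.

<?php
// Illustration only – not s9y's code; all function names here are hypothetical.
define('APPROVE', 0); define('MODERATE', 1); define('REJECT', 2);

function moderate_comment($comment, $entry)
{
    if (is_duplicate($comment))                             return REJECT;
    if (same_ip_posted_within($comment['ip'], 120))         return REJECT;
    if (links_listed_on_surbl($comment['body']))            return REJECT;
    if (count_links($comment['body']) > 20)                 return REJECT;
    if (count_links($comment['body']) > 3)                  return MODERATE;
    if (entry_age_in_days($entry) > 30)                     return MODERATE;
    if (entry_age_in_days($entry) > 7 && !captcha_solved()) return REJECT;
    if (wordfilter_matches($comment['body']))               return MODERATE;
    if (akismet_thinks_spam($comment))                      return MODERATE;
    return APPROVE;
}
?>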

While I know that some of these restrictions may be inconvenient, please believe me that they are in place both to increase the overall quality of the content here and to make my life a bit easier.

Serendipity really is a nice blogging engine. Go ahead and try it!

More Asterisk stuff

I thought I’d give a little update on what’s going on in my Asterisk installation as some of the stuff might be useful for you:

Speed Dial

If you have Snom Phones and want to program the function keys to dial a certain number, be sure to select “Speed Dial” and not “Destination” when entering the number.

Destination was used in earlier firmwares, but it is now used not only to make the phone dial that number, but also to subscribe to that line so the LED lights up when the line is in use.

This obviously makes no sense at all with external numbers and requires some configuration for internal ones (see below). The additional benefit is that buttons with “Speed Dial” assigned don’t turn on the LED.

Dial by click

You can dial a number from the Mac OS X address book as well. Asterisk will make your phone ring and redirect the call once you pick up (just like AstTapi on Windows). I had the best experience with app_notify. I don’t quite like the way it notifies clients of incoming calls (hard-coding IP addresses of clients is NOT how I want my network to operate), but maybe a better solution will come along later. Currently, I’m not using this feature.

Dialing works though.

By the way, you don’t have to modify manager.conf if you already have the entry for the AstTapi solution. app_notify will ask for a username (manager context) and password the first time it launches.
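
For the curious: the click-to-dial part is simply the Originate action of the Asterisk Manager Interface. Here’s a minimal sketch of what such a request looks like in PHP – host, credentials, channel, context and the dialled number are placeholders for illustration, not my actual setup:

<?php
// Minimal click-to-dial sketch via the Asterisk Manager Interface (port 5038).
// Host, credentials, channel, context and number below are placeholders.
$sock = fsockopen('192.168.2.1', 5038, $errno, $errstr, 5);
fwrite($sock, "Action: Login\r\nUsername: click2dial\r\nSecret: verysecret\r\n\r\n");
// Ring my phone (SIP/61) first; once I pick up, Asterisk connects me to the dialled number.
fwrite($sock, "Action: Originate\r\n" .
              "Channel: SIP/61\r\n" .
              "Exten: 0441234567\r\n" .
              "Context: internal\r\n" .
              "Priority: 1\r\n\r\n");
fwrite($sock, "Action: Logoff\r\n\r\n");
fclose($sock);
?>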

Subscription

As noted above, your Snom phone can be told to monitor a line. The corresponding LED will blink (Asterisk 1.2+) when the line is ringing and light up when it is busy.

Snom-wise, you’ll have to configure a function key as a “Destination” and enter the extension you’d like to monitor.

Asterisk-wise you have to make various changes:

sip.conf

  • Add subscribecontext=[context], where context is the context in extensions.conf in which the corresponding Snom phone is configured. I’ve put this into the [general] section because all phones share the same context (internal).
  • Add notifyringing=yes if you have Asterisk >= 1.2 and want the LEDs to blink while the line is ringing (see the snippet below).
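
Put together, the relevant bit of sip.conf looks roughly like this (trimmed down to these directives; the rest of my [general] section is left out):

[general]
context=internal          ; all phones end up in the same dialplan context
subscribecontext=internal ; where Asterisk looks for the hints defined below
notifyringing=yes         ; Asterisk >= 1.2: blink the LED while the line is ringing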

extensions.conf
This is a bit hacky: in the SIP context, add a hint for every line that should be allowed to be monitored. Unfortunately, you can’t use macros or variables here, so it’s messy.

On my configuration it’s:

[internal]
exten => 61,hint,SIP/61
exten => 62,hint,SIP/62
exten => 63,hint,SIP/63
exten => 64,hint,SIP/64
exten => _6[1-9],1,Dial(SIP/${EXTEN},,tWw)

I would have preferred something like this:

[internal]
exten => _6[1-9],hint,SIP/${EXTEN}
exten => _6[1-9],1,Dial(SIP/${EXTEN},,tWw)

Patterns in hints don’t seem to work, though – this may have been fixed in 1.2.2, but I’m not sure just yet.

You may have to reboot your phone after making the configuration change there. To check the subscriptions in Asterisk, use SIP show subscriptions.

You should get something like this:

asterisk*CLI> SIP show subscriptions
Peer             User    Call ID       Extension  Last state  Type
192.168.2.152    62      3c26700b57e   61         Idle        dialog-info+xml
1 active SIP subscription

This is not quite tested as of yet because the guy at extension 61 is currently in his office and I don’t want to bother him ;-)

Update while editing/correcting this text: It works. The guy has left and I checked it.

Praise to VLC

Now that I can be sure to have a Windows system at hand should I need one, I’m switching more and more to Mac OS X for my day-to-day productivity work – at least when it’s not about doing Delphi work.

Now this sounds crazy, but in the end it all boils down to (IMHO) better font rendering and an alpha-blended terminal.

Functionality-wise and productivity-wise, MacOS and Windows are on par. Both systems have little things that suck and both have advantages in other little things.

In the end, both are OSes.

Today, I was in the position of wanting to listen to the streaming version of OCRemixes once again.

They are using an Ogg stream, which I appreciate for two reasons: for one, it provides a better bandwidth-to-quality ratio, and furthermore Ogg is a patent-free technology.

Problem: How do you listen to an Ogg stream on OS X?

Apple’s arrogance with regard to QuickTime is one of those things that bother me in OS X. Apple: there’s more to the world of multimedia than just QuickTime and MP3, so make the infrastructure extensible in a way that actually works (hint: DirectShow works quite well – despite being a Microsoft product).

There are some QT/Ogg plugins available on the net, but none of them (not even one I compiled myself to be 100% sure I had an Intel build) actually worked.

Just when I thought that all was lost, I remembered VLC.

My experience with video had already shown me: VLC just plays everything you can possibly throw at it. And yes: it managed (and still manages) to play the remixes stream.

And the UI is great on OS X (if you don’t look at the awful preferences dialog).

VLC, IMHO, is a really nice example of how a cross-platform UI should be done: it looks like it’s perfectly at home on my OS X. And it ALSO looks like it’s perfectly at home on Windows XP (a bit minimalistic, but it does its job).

And by “feeling at home” I don’t mean “it looks the same on both platforms”. No. It adapts perfectly to the look & feel of the platform it’s running on. No common theme, no quasi-OS look. It looks as much like a native Mac OS X application as, say, iTunes or TextMate does.

So: Thanks guys. This is great stuff!

Ruby on Rails

Today, our first project done in Ruby on Rails went live.

Christoph has done a wonderful job on it. The only thing I had to do was to fix up some CSS buglets in IE and set up a deployment environment (development was done using the Rails-integrated WEBrick server).

Personally, I think I’d have preferred using LightTPD with FastCGI instead of Apache, but the current setup pretty much prevented me from doing so.

So I installed mod_fastcgi for Apache, which was very, very easy on Gentoo (emerge mod_fastcgi – as usual).

Once I had corrected the interpreter path in dispatch.fcgi (which was set to the location of Christoph’s development environment), the thing began working quite nicely.

And fast.
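
For reference, there isn’t much to it once mod_fastcgi is loaded: the shebang on the first line of public/dispatch.fcgi has to point to the Ruby binary of the deployment machine, and the .htaccess Rails generates needs to route requests through the FastCGI dispatcher instead of the CGI one. Roughly like this – the paths are assumptions for illustration, not the actual values of this setup:

# public/.htaccess – hand .fcgi files to mod_fastcgi and route requests through Rails
AddHandler fastcgi-script .fcgi
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ dispatch.fcgi [QSA,L]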

Considering the incredible amount of magic Rails does behind the scenes, those 73.15 requests per second I got are very, very impressive (ab -n 100 -c 5). And actually so much faster than a comparable PHP application running under mod_php on a slightly faster server (19.36 req/s, same ab call).

The results have to be taken with a grain of salt as it’s different machines, different load and a different application.

But it’s similar enough to be comparable for me: the PHP application is running on a framework somewhat similar to Rails, with less optimization but also less complexity. Both benchmarks ran against the unauthenticated start page, which pretty much comes down to including some files and rendering a template. No relevant database queries.

I wonder how much of this higher speed is due to FastCGI (a very convincing technology) instead of running the code inside the Apache server itself, and how much is just Rails being faster.

I will set up a better-defined test environment to allow an accurate performance comparison: a comparable application under mod_php, php-fastcgi and rails-fastcgi. And if I have time, I’m going to run the two FastCGI tests on LightTPD as well.

Benchmarking is fun. Time-consuming, but fun.

For now, I’m content with the knowledge that an application that took very little effort to write (even considering that Christoph had to learn the Rails environment first) is running fast enough for its intended purpose.

As Christoph said: Rails Rules

thanks, guys

PHP Stream Filters

You know what I want? I want to append one of those nice and shiny PHP stream filters to the output stream.

I have this nice Windows application that receives a lot of XML data that can be compressed with a very high compression factor. And as the Windows application is for people with very limited bandwidth, this seems to be the perfect thing to do.

You know, I CAN compress all my output already. By doing something like this:

<?php
ob_start();
echo "stuff";
$c = ob_get_clean();
echo bzcompress($c);
?>

The problem with this approach is that the data is only sent to the client once it’s assembled completely. bzip2 on the other hand is a stream compressor that is very well able to compress a stream of data and send it out as soon as a chunk is ready.

The Windows client on the receiving end is certainly capable of handling that: as soon as bytes come in, it decompresses them chunk-wise and feeds them to an Expat-based parser which handles the extracted data. Now I want this to happen on the sending side as well.

The following code does work sometimes:

<?php
  // 'blocks' (compression block size, 1-9) and 'work' (work factor, 0-250)
  // are the parameters the bzip2.compress filter accepts.
  $param = array('blocks' => 9, 'work' => 0);
  $fh = fopen('php://stdout', 'w');
  stream_filter_append($fh, 'bzip2.compress', STREAM_FILTER_WRITE, $param);
  fwrite($fh, "Stuff");
  fclose($fh);
?>

But sometimes it doesn’t and produces an incomplete bzip2 stream.

I have a certain idea of why this is happening (the remaining data isn’t flushed to the filter on shutdown), but I can’t prevent it. Sometimes the data just doesn’t get written out, which makes this method unusable.

I’m afraid to report this to bugs.php.net as I’m sure it’s something PHP was not designed for and it’ll get marked as BOGUS faster than I can spell ‘gnegg’.

So this means that the Windows client just has to wait for the data to be extracted, converted to XML and compressed.

*sigh*

(Thinking about it, there may be the option of writing the data to a temp file (with the filter attached to its handle) and then reading it out to the browser immediately afterwards. But come on, this can’t be the solution, can it?)

Update: I’ve since tracked the problem down to a bug in PHP itself, for which I found a fix. My assumption that writing to a temporary file could help was wrong: PHP itself does not check the return value of a bzlib function correctly and never writes out a half-full buffer on stream close – neither to the output stream nor to a file.

Asterisk Extended

Playing around with Asterisk, it was inevitable for me to stumble upon AGI.

AGI is a protocol quite like CGI which allows third-party applications to be plugged into Asterisk, giving them full control over the call handling. That way, even non-Asterisk developers are able to write interesting telephony applications.

One thing I always wanted to do is set the CallerID on incoming calls. Some numbers are stored in our customer database, and there is no reason not to show the customer names on the phones’ displays instead of only the number.

The Snom phones do have a little address book, but it’s very limited in both memory and feature set, so it was clear that I’d have to set the CallerID via Asterisk (SIP allows for the transmission of a caller ID, and so does AGI).

Additionally, I thought it would be very nice to use the Swiss phone book at tel.search.ch, or even the non-free ETV, to try and look up numbers not already in our database.

That scenario is exactly what AGI is for.

As AGI works like CGI, a new process is created for every call to an AGI application. This is not an option if you want to use interpreted languages. Well, it *is* an option considering the low number of calls we get per time unit, but still: I don’t like to deploy solutions with obvious drawbacks.

Besides, launching a PHP interpreter (I’d have written this in PHP) can easily take a second or so – not acceptable if you want the AGI script to be mandatory on each call. Think of it. You don’t want the caller to wait for your application.

The solution to this is FastAGI, which works like FastCGI: a server keeps running and answers AGI requests. This way, you start the interpreter once and just serve calls from then on, saving the interpreter’s startup time.

Even better: it allows you to run the AGI applications on a machine other than the PBX. This is good because you want the PBX to have as many CPU time slices as possible.
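
Wiring this up in the dialplan is then a one-liner per incoming context: the AGI() application with an agi:// URL talks FastAGI to the remote server. A sketch, with host, port and lookup path being assumptions for illustration:

; extensions.conf – let the FastAGI server set the CallerID before the phones ring
exten => s,1,AGI(agi://192.168.2.10:4573/calleridlookup)
exten => s,2,Dial(SIP/61,20,tr)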

Unfortunately, this made PHP quite unsuitable for me: While it is possible to write a socket server in PHP (ext/posix does exist), I never managed to get it to work as I wanted to. It was slow, unstable and created zombies.

Then I found RAGI which was even better. For quite some time now, I have been looking for an excuse to do something with Ruby on Rails. With RAGI, I finally got it.

Getting the sample provided with RAGI to work was very easy (look at the README file). And reading through that sample file, I was very pleased to see how simple it is to write an AGI application in Ruby (RAGI uses FastAGI, of course).

Now I can finally start hacking away in Rails to create my internal-customer-database / external-phone-lookup application (with some nice caching/timeout handling) to finally show the name behind the calling phone number on the displays of our SNOM phones.

Of course I’m going to provide the source code here once I’m done.

PostgreSQL scales

Via zillablog, I was notified of FeedLounge switching to PostgreSQL.

FeedLounge is just another in a series of web-based services switching their RDBMS away from MySQL.

For one thing, it’s the features that are driving this. Postgres just has more features, and sometimes you need them. Triggers? Views? Until very recently, those features were not available in MySQL.

And when they switch, they notice another thing: PostgreSQL scales very well.

While everyone says that MySQL is optimized for speed and that there’s no database system as fast as MySQL, this is only true for small setups.

In small setups, MySQL scores with its ease of use and administration. But as soon as you want more (more features, more concurrent users), you will run into MySQL’s limitations and – even more important – MySQL will slow down, it will use lots of RAM and disk space, and it will even begin to corrupt its tables (something an RDBMS should never ever do – except perhaps on broken hardware, where it’s unavoidable).

PostgreSQL does not have these flaws. It may be a little bit slower under low load, but its speed and reliability scale with its users.

PostgreSQL scales.

PostgreSQL 8.1

A new year, a new announcement of a new version of PostgreSQL, an all-time member of my favourite tools list.

2002 brought us PostgreSQL 7.3, 2003 brought 7.4 (no announcement on this blog) and 2004 brought us PostgreSQL 8.0 (the dates of the blog entries match by sheer accident – I did not time them at all).

And now it’s time for the next announcement. While the team is a bit early this time (it’s not December 2nd yet), it once more brings a lot of good stuff.

The most interesting aspect of those PostgreSQL releases: they always bring just the feature I need at the time of the release.

7.1 brought TOAST tables, 7.4 brought autovacuum, 8.0 brought the Windows version, and now 8.1 brings some much-needed performance improvements for large tables and large COPY operations (which is what I’m doing currently).

And it’s not just me: Christoph needed something like PHP’s max() function on the SQL side. And what do we learn? 8.1 brings us greatest().
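
The difference to SQL’s aggregate max() being that greatest() picks the largest of several values within a single row – i.e. it behaves like PHP’s max(). A quick example (table and column names are made up):

-- per-row maximum over several columns; new in PostgreSQL 8.1
SELECT greatest(price_reseller, price_retail, price_promo) AS highest_price
FROM products;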

Congratulations to another splendid release, PostgreSQL team. I hope to see you going as strong for the next couple of years.

Once more: PHP and SOAP

I can’t resist: I made my third attempt at getting a SOAP server in PHP to work (I only documented my first try here on the blog).

My first try was a little more than two years ago. That one failed miserably.

The next try was last November. I got somewhat further than on my first attempt, but Visual Studio was unable to import the WSDL correctly as soon as I was passing arrays of structs around.

And now I tried again – this time with PEAR SOAP 0.9.1

This time it all looks so much better. First of all, I’m doing this because I really have to: for one of our PopScan customers, we are accessing their IBM DB2 database – currently via a Perl-based server that’s nearing the end of its maintainability, so I decided to redo it in PHP (PHP code is somewhat cleaner than Perl code, and I’m more fluent in PHP than in Perl).

The DB2 client (especially the one needed for that old 7.1 database) is clumsy, a bit unstable, and really not something I want to link into the Apache server that serves all our clients.

So the idea was to compile another Apache, run it on another port, bound to localhost only, and add PHP with the DB2 client. This combo is then accessed via some kind of RPC from the nice, DB2-free standard installation.

Well. And instead of once again designing a custom protocol (like I did for the Perl server), I thought: maybe give SOAP another shot.

In contrast to previous experience, this time it was the server that worked and the client that failed. Using PEAR SOAP 0.9.1, creating the server (which generates the dreaded WSDL) went without a flaw. This time I was even able to import the WSDL into VS 2003, which I tried just for fun.

Passing around arrays of structs of structs was no problem at all. After building the self::$__typedef and self::$__dispatch_map arrays, passing around those data types has become really intuitive: Just create arrays of arrays in PHP and return them. No problem.
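
To give an idea of the shape of those arrays, here is a rough, from-memory sketch of a PEAR::SOAP server class – the method, namespace and field names are made up, and the exact typedef syntax may differ from what the working server uses:

<?php
// Rough PEAR::SOAP sketch – names are hypothetical, syntax details may differ.
class ArticleServer {
    var $__dispatch_map = array(
        'getArticles' => array(
            'in'  => array('customerId' => 'int'),
            'out' => array('articles' => '{urn:ArticleServer}ArticleList'),
        ),
    );
    var $__typedef = array(
        'Article'     => array('id' => 'int', 'name' => 'string'),
        'ArticleList' => array(array('{urn:ArticleServer}Article')),
    );

    function getArticles($customerId) {
        // plain nested PHP arrays matching the typedef are all it takes
        return array(array('id' => 1, 'name' => 'Example article'));
    }
}
?>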

Well done, PEAR team!

This time I had problems with the PEAR SOAP client. It insisted on passing around ints as strings, which the server (correctly) did not like.

Instead of spending lots and lots of time debugging that, I went the pragmatic way and used PHP5’s built-in SoapClient functionality. No problems there.
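
For the record, the built-in client really is as short as it gets. A minimal sketch – the method name is made up, the WSDL URL is the one that ended up working (see below):

<?php
// PHP 5 built-in SOAP extension; getArticles() is a hypothetical method name.
$client   = new SoapClient('http://be.sen.work:5436/index.php?wsdl');
$articles = $client->getArticles(42);
print_r($articles);
?>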

And then it suddenly broke

My test client was written for the CLI version of PHP, which was 5.0.4. The Apache module on the live server was 5.0.3.

All I got with 5.0.3 was an HTTP Client Error (SoapFault exception: [HTTP] Client Error).

Whatever I did, it did not go away, but to my delight I saw that PHP did not even connect to the server to fetch the WSDL. That was good, as it allowed me to debug much more quickly.

In the end it was the URL of the WSDL. Every version of PHP 5 (even the 5.1 betas) – except 5.0.4 – does not like this:

http://be.sen.work:5436/?wsdl

it prefers this

http://be.sen.work:5436/index.php?wsdl

I ask you: why is it that way? The first version is a valid URL as well. The served WSDL is correct – it’s the same file that gets called, and it returns exactly the same content. This is so strange.

All in all, I have to say: SOAP with PHP – after two years – is still not ready for prime time. It’s still in a state of “sometimes working, sometimes not”. But as I now have an environment where it’s known to work, and as I’m in total control of said environment, I will go with SOAP nonetheless. It’s so much cleaner (and more secure: more people than just me are looking at the SOAP code) than designing yet another protocol and server.

Oh. And the bottom line is: Never trust protocols that call themselves “simple” or “lightweight” ;-)

Firefo^WDeer Park Alpha 1

Yesterday, a developer preview of Firefox 1.1 was released. To not confuse end users, they’ve called it Deer Park Alpha 1, and you won’t see (m)any Firefox references in the UI.

As always with a major release, extensions and themes tend to break. And as always, you can try to patch the install.rdf file inside the XPI file (it’s just a ZIP archive) by changing the MaxVersion (see the snippet after the list) and check whether the extension still works. Here’s what I got so far:

  • Installing Deer Park Alpha 1 breaks Firefox. You basically get an unstyled white screen when you start Firefox. This is not great, but unavoidable, I suppose.
  • You can patch up the Qute theme and it mostly works (install it with this script). The preferences screen looks funny though (it’s mostly transparent). So if you don’t change any preferences, you can go with Qute.
  • The Web Developer toolbar continues to work without patching, though with limited functionality.
  • Download Manager Tweak works as always, though you can’t access its preferences screen from the preferences dialog (opening it from the extensions window works fine, though).
  • Feed Your Reader can be patched up. It does not work any more, though.
  • Greasemonkey can be patched up. It does not work though, and throws an error when trying to install a user script.
  • Platypus seems to work fine, though it’s useless as long as Greasemonkey doesn’t work.
  • Adblock can be patched and actually continues to work.
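
For reference, “patching” means nothing more than unzipping the XPI and bumping em:maxVersion in install.rdf for the Firefox target application entry – roughly like this (the exact version string to use is a guess on my part; adjust it to whatever Deer Park reports):

<!-- install.rdf, inside the extension's targetApplication block for Firefox -->
<em:targetApplication>
  <Description>
    <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id> <!-- Firefox -->
    <em:minVersion>1.0</em:minVersion>
    <em:maxVersion>1.0+</em:maxVersion> <!-- bump this so Deer Park accepts the extension -->
  </Description>
</em:targetApplication>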

This scenario underlines the one problem I’m having with Firefox: they seem unable to provide a stable extension API. On the one hand, this is a good thing: cleaning up the API now and then helps keep the product clean and fast. On the other hand, it’s bad for the end user. What do you do if your favourite extension stops being developed and a new browser version comes out? Either you stop using the extension, or you stay with the old release of the browser (I’d do that if Adblock stopped working, for example).

But you can’t stay on old versions. Sometime in the future, a security problem will show up. If you are unlucky enough, the older version is not supported any more. So the choice is: Not using the plugin or surfing with an insecure browser.

That’s why I have so few extensions installed. Those I have are popular enough to give me some guarantee that they will be updated. Those I’d like to install but that come without such guarantees, I don’t install, so I don’t get used to having them available.

This is not the best situation ever. The people at Mozilla should try to stabilize the API as soon as possible. And they should try to stay backward compatible for at least two major releases or so.

I will now go and look for the people responsible for those extensions and try to report my findings to them. And hope for the best.