The price of automatisms

Visual Studio 2005 and the .NET Framework 2.0 brought us the concept of table adapters and a nice visual designer for databases allowing you to quickly “write” (point and click) your data access layer.

Even when using the third-party SQLite library, you can make use of this facility, and it’s true: basic stuff works impressively well and quickly.

The problems start when what you intend to do is more complex. Then the tool becomes braindead.

The worst thing about it is that it’s tailor-made for SQL Server and that it insists on parsing your queries itself instead of letting the database or even the database driver do that.

If you add anything to your query that SQL Server doesn’t support (keep in mind that I’m NOT working with SQL Server – I don’t even have SQL Server installed), the tool will complain about not being able to parse the query.

The dialog provides an option to ignore the error, but it doesn’t work the way I hoped it would: “Ignore” doesn’t mean “keep the old configuration”. It means “work as if there were no query at all”.

This means that even something as simple as writing “insert or replace” instead of “insert” (which saves one query per batch item – and I’m doing lots of batch items) or adding a “limit 20” clause will make the whole database designer unusable for you.

The ironic thing about the limit clause is that the designer happily accepts “select top xxx from…”, which then fails at run time because SQLite doesn’t support that proprietary extension.
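For the record, both constructs are perfectly legal SQLite. Here’s a quick sketch using Python’s sqlite3 module (table and column names are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")

# "insert or replace" saves the separate check-then-insert/update round
# trip per batch item: the second statement replaces the first row.
con.execute("INSERT OR REPLACE INTO items (id, name) VALUES (1, 'widget')")
con.execute("INSERT OR REPLACE INTO items (id, name) VALUES (1, 'gadget')")

# "limit" is the SQLite way to cap the result set...
rows = con.execute("SELECT name FROM items LIMIT 20").fetchall()
print(rows)  # [('gadget',)]

# ...while SQL Server's "select top" is rejected at run time.
try:
    con.execute("SELECT TOP 20 * FROM items")
except sqlite3.OperationalError as err:
    print("rejected:", err)
```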

So in the end it’s back to doing it manually.

But wait a minute: doing it manually is even harder than it should be, because the help, the tutorials, the books and even Google all only talk about the automatic way – either unaware or not caring that it just won’t work once you want to do more than example code.

Oldstyle HTML – the worst offenders

More and more, the WWW is being cleansed of old, outdated pages. In ever more cases, browsers will finally be able to go into standards mode – no more quirks.

But one bastion still remains to be conquered.

Consider this:

<br><font size=2 face="sans-serif">Danke</font>
<br><font size=2 face="sans-serif">Gruss</font>
<br><font size=2 face="sans-serif">xxxx</font>

By accident, I had my email client on “View Source” mode and this is the (complete) body of an email my dad sent me.

Besides the fact that it’s a total abuse of HTML email (the message does not contain anything plain text would not have been able to convey), it’s an obscene waste of bandwidth:

The email ALSO contains a text alternative part, effectively doubling its size – not to mention the unneeded HTML tags.

What’s even worse: this is presentational markup at its finest. Even if I insisted on creating an HTML mail for this message, this would have totally sufficed:

Danke<br />
Gruss<br />
xxxx<br />

Or – semantically correct:

<p>Danke</p>
<p>Gruss</p>
<p>xxx</p>

Personally, I actually do see a reason behind a certain kind of HTML email. Newsletters or product announcements come to mind. Why use plain text if you can send the whole message in a way that’s nice for users to view?

Your users are used to viewing rich content – every one of them probably has a web browser installed.

And with today’s bandwidth it’s even possible to transfer the whole message and all its pictures in one nice package. No security warnings, no crappy-looking layout due to broken images.

What I don’t see, though, is what email programs are actually doing. Why send over messages like the one in the example as HTML? Why waste the user’s bandwidth (granted: it doesn’t matter any more) and even create security problems (by forcing the email client to display HTML) to send a message that doesn’t look any different from one consisting of plain text?

The message also underlines another problem: the old presentational markup actually lent itself perfectly to creating WYSIWYG editors. But today’s way of creating HTML pages just won’t work in these editors, for the reasons I outlined in my posting about Word 2007.

Still: using a little bit of CSS could result in so much nicer HTML emails, which would have the additional benefit of being totally readable even if the user has a client not capable of displaying HTML (disabling that is a wise decision, security-wise).
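To illustrate – a purely hypothetical sketch, using inline styles because many mail clients ignore style sheets:

<p style="font-family: sans-serif; margin: 0">Danke</p>
<p style="font-family: sans-serif; margin: 0">Gruss</p>
<p style="font-family: sans-serif; margin: 0">xxxx</p>

A client that strips the markup is still left with perfectly readable text.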

Oh and in case you wonder what client created that email…

    X-MIMETrack: Serialize by Router on ZHJZ11/xxxx(Release 7.0.1FP1|April 17, 2006) at
     02.10.2006 16:35:09,
    	Serialize complete at 02.10.2006 16:35:09,
    	Itemize by SMTP Server on ZHJZ05/xxxxx(Release 6.5.3|September 14, 2004) at
     02.10.2006 16:36:15,
    	Serialize by Router on ZHJZ05/xxxxx(Release 6.5.3|September 14, 2004) at
     02.10.2006 16:36:19,
    	Serialize complete at 02.10.2006 16:36:19

I wonder if using a Notes version from September 2004 is a good idea in today’s world full of spam, spyware and other nice things – especially considering that my dad works in a public office.

My new Flat – Location

As I’ve mentioned before, I’m moving into my very own flat quite soonish.

I can’t show pictures of the interior just yet as the current owners have not moved out yet. What I can show you though is a picture of the surroundings:

The picture was ripped off the GIS Browser Zürich provides for us. I could have used map.search.ch (which had AJAX before google maps and also has a prettier zoom than its hyped counterpart, btw) and I could even have created a link, but that would kind of give away my address (and the images of the GIS browser have a much higher resolution).

But now to the flat itself:

The green stuff to the north of the building is forest. And there’s a nice creek flowing through it (in a more or less straight east-to-west line). The forest is also quite big: it takes about two hours to walk from the entrance in the west to the exit in the east.

Additionally, my parents live in the vicinity of the forest’s top end, so it’ll be a very nice walk for me when I visit them and decide to go by foot or bike.

Forest, no streets… way off the city life?

Not at all: the place is located near Zürich and I can reach my work place by train (Forchbahn, even) in just 9 minutes – or 20 if I decide to walk through the forest.

So I’m getting the best of both worlds: nature literally just outside my front door (I’ll be getting myself a cat next year) and still closer to my work place than before. And about the same distance from the central parts of Zürich as I am right now.

Granted: walking home right now is more or less walking on level ground while it will be uphill later on, but it will be in the middle of the forest, beside a creek, as opposed to a walk through the city.

But that’s not all just yet.

It’s very nearby the place where I’ve grown up.

Despite moving away from there back in 1993, I never bonded as much to any other place. That old place still feels like home to me and I’m getting warm feelings whenever I’m passing by.

Now I’m moving to a place where I played when I was a kid – granted, we weren’t there every free minute as it was a bit off, but we visited that forest now and then – we even once played quite close to where the house is.

And only three years ago, I used dry-ice to make bottles of PET explode – right in the same forest – also quite near the place where I’ll be living.

All these features make this flat the truly amazing thing it is. Granted: room for a nice home cinema, a large bathtub, a Squeezebox in every room, heck, 140 m² of space – all that is nice. But what really makes the flat special is its location.

November 1st, I’ll officially be its owner and then I’ll be able to post some pictures from the inside.

OS X 10.4.8 – Update gone wrong

Today, Software Update popped up and offered to upgrade the OS to 10.4.8.

Usually I turn down such offers as I don’t want to reboot my system in mid-day, but it felt like a good time to do it nonetheless. This is why I accepted.

After the installation, the update asked me to reboot, which I did.

What came afterwards was as scary as it was ironic: The system rebooted into Windows XP.

But no worries: the 10.4.8 update isn’t a Windows installation in disguise. The Windows installation that greeted me was the one I have on a second partition – mostly to play WoW (which I don’t any more).

A quick reboot showed me even more trouble: whenever my MacBook tried to boot from the MacOS partition, it showed the folder-with-question-mark icon for a few seconds, then the EFI’s BIOS emulation kicked in and booted from the MBR, which is why I was seeing Windows on my screen.

Now, I’d gladly explain here what went wrong and how I fixed it, but I was in a state of panic, so I have not exactly documented my fix. And as I tried many steps at once without checking whether each step had fixed the problem, I don’t even know what was wrong (which certainly doesn’t stop me from guessing).

Anyways.

I booted from the MacBook DVD, first selected Disk Utility in the tools menu and let it check the disk for errors (none found, as I expected), then let it repair permissions (tons of errors found, but I doubt this was the problem).

Then I quit the disk utility and launched terminal.

Aside from the fact that I had some trouble actually entering commands (how do I set the keyboard layout in that pre-install terminal?), I quickly went to /System/Library, deleted the extensions cache (Extensions.kextcache), went to /System/Library/Extensions and removed all extensions installed by Parallels (which I suspected of being responsible for the problem).

I think the list was vmmain.kext, helper.kext, Pvsnet.kext and hypervisor.kext. You have to remove them with rm -r as they are bundles (directories).

After that, I rebooted the system and the question-mark-on-a-folder disappeared and the updating process completed.

I can’t tell you how scared I was: my OS X installation is tweaked to oblivion and I’d really, really hate to lose all that stuff. Never mind the data – it’s the configuration files and utilities, and of course fink.

*shudder*

As I have not tried to reboot after completing each of the steps above, I’m unable to say what actually caused the problem. I doubt it was Parallels though as I’m currently running 10.4.8 and Parallels (which I had to reinstall of course). I also doubt it was the permissions issue as wrong permissions are unlikely to cause boot-failure.

So it probably was a corrupted Extension cache. Or the update process not able to cope with the Parallels extensions.

Me being in the dark makes me unable to place blame, so you won’t find any statement about how a more or less forced OS update should never cause a failure like this…

For all I know, this could have happened without the update anyways.

The good news, on the other hand, is that I’m slowly reaching a state where I’m as good at fixing Macs as I am at fixing Windows and Linux. Just don’t tell that to my friends who have Macs.

Correlation between gnegg.ch and WoW

If you take a look at the archive (a feature I’ve actually only discovered just now), you’ll notice quite an interesting distribution of posts here on gnegg.ch.

2002 was when it all started. November was still a bit slow, but in December I really got into blogging, only to let it slip a bit during 2003.

2004, I began subscribing to tons of RSS feeds which provided me with a lot of inputs for my own articles. You’ll notice a significant increase of posts during the whole year.

Then, in 2005, my WoW-time began. My first WoW-related posting was from February 21st, 2005 and makes a reference to when I bought WoW, which would be – provided I’m calculating correctly – February 15th 2005.

Going back to the archive, you’ll immediately notice something happening to the post count: it’s steadily going down. From a good 9 entries in January (pre-WoW) down to one entry in October, which is more or less when I got my first character to level 60. In November I was affected by my first fed-up-ness with WoW, which lasted till January 2006 (post count coming up again – despite Christmas and all, which kept me away from computers).

Then, in January, I was playing again, getting closer to 60 with my second character in February (just one posting).

March was WoW-less again due to my feeling of not having anything to do any more.

In mid-April, I began playing again and started my third character… (posts going down) – which I got to 60 at the end of May.

June was spent playing at 60, and before the end of the month I began feeling fed up with WoW. And burned out. I clearly felt I had wasted way too much of my life. And I felt like I was truly addicted to WoW. So I pulled the emergency brake and stopped playing.

As you can see, I was back to 16 posts in July which also was due to my “Computers under my command”-series which was easy to do due to the topics being clear in advance.

August is interesting. Have a look at the month calendar and guess when I took my lv60 character out again!

More or less regular postings here until August 10th. Then nothing.

September is better again because I put WoW into the deep-freeze again – especially after having seen what WoW does to my other hobbies. gnegg.ch is a very nice indicator in that regard.

So I’ve come to the same conclusion as Adam Betts, who also stopped playing WoW after noticing his real life being severely affected by it.

World of Warcraft is highly addictive and I know of no person who could claim not to be affected by this. Once you start to play, you play. Even worse: even if you think that you’ve put it behind you and that you can control it, it just takes over again.

So for me it’s clear what I have to do: I will stop playing. For real this time. No taking out my character again. No-more-playing. I won’t delete my characters as they are the result of a lot of work, but I will cancel my subscription.

I’m really grateful for the archive function of gnegg.ch: it was a totally clear indicator of my addiction and it remains a perfect way to keep me from going back, as everyone will know I have if the post count goes down again.

SQLite, Windows Mobile 2005, Performance

As you know from previous posts, I’m working with SQLite on mobile devices, which lately means Windows Mobile 2005 (there was a Linux device before that, though, but it was hit by the RoHS regulation of the European Union).

In previous experiments with the older generation of devices (Windows CE 4.x / PocketPC 2003), I was surprised by the high performance SQLite is able to achieve, even with complex queries. But this time, something felt strange: searching for a string in a table was very, very slow.

The problem is that CE5 (and with it Windows Mobile 2005) uses non-volatile flash for storage. This has the tremendous advantage that the devices don’t lose their data when the battery runs out.

But compared to DRAM, Flash is slow. Very slow. Totally slow.

SQLite doesn’t load the complete database into RAM; it only loads small chunks of the data. This in turn means that when you have to do a sequential table scan (which you do when you have a LIKE ‘%term%’ condition), you are more or less bound by the speed of the storage device.

This is what caused SQLite to be slow when searching. It also made synchronizing data slow, because SQLite writes data out into journal files during transactions.
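You can see the table scan coming by asking SQLite for its query plan. A quick sketch with Python’s sqlite3 module (table and index names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE prod (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("CREATE INDEX idx_name ON prod (name)")

# LIKE '%term%' can't use the index: every row has to be read,
# which on CE5's slow flash storage is what hurts.
(plan_like,) = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM prod WHERE name LIKE '%term%'"
).fetchall()

# An exact comparison can use the index and touches only a few pages.
(plan_eq,) = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM prod WHERE name = 'term'"
).fetchall()

print(plan_like[-1])  # contains "SCAN" - a full pass over the data
print(plan_eq[-1])    # contains "SEARCH" - an index lookup
```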

The fix was to trade off launch speed (the application is nearly never started fresh) for operating speed by loading the data into an in-memory table and using that for all operations.

attach ":memory:" as mem;

create table mem.prod as select * from prod;

Later on, the trick was to just refer to mem.prod instead of plain prod.

Of course you’ll have to take extra precautions when you store the data back to the file, but as SQLite even supports transactions, most of the time you get away with:

begin;
delete from prod;
insert into prod select * from mem.prod;
commit;

So even if something goes wrong, you still have the state of the data of the time when it was loaded (which is perfectly fine for my usage scenario).
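The whole round trip can be sketched with Python’s sqlite3 module (the real thing runs the same SQL against the on-flash database file; the sample data here is made up):

```python
import sqlite3

# The file database - a :memory: connection stands in for it here.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE prod (id INTEGER PRIMARY KEY, name TEXT)")
con.executemany("INSERT INTO prod VALUES (?, ?)",
                [(1, "widget"), (2, "gadget")])

# Load everything into an attached in-memory database once at startup.
con.execute("ATTACH DATABASE ':memory:' AS mem")
con.execute("CREATE TABLE mem.prod AS SELECT * FROM prod")

# All day-to-day operations go against the fast mem.prod copy.
con.execute("UPDATE mem.prod SET name = 'sprocket' WHERE id = 2")

# Write back in one transaction: if anything fails, the file keeps
# the state from load time.
with con:  # commits on success, rolls back on error
    con.execute("DELETE FROM prod")
    con.execute("INSERT INTO prod SELECT * FROM mem.prod")

rows = con.execute("SELECT name FROM prod ORDER BY id").fetchall()
print(rows)  # [('widget',), ('sprocket',)]
```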

So in conclusion some hints about SQLite on a Windows Mobile 2005 device:

  • It works like a charm
  • It’s very fast if it can use indexes
  • It’s terribly slow if it has to scan a table
  • You can fix that limitation by loading the data into memory (you can even do it on a per-table basis)

Word 2007 – So much wasted energy

Today, I’ve come across a screencast showing how to quickly format a document using the all-new Word 2007 – part of Office 2007 (don’t forget to also read the associated blog post).

If you have any idea how Word works and how to actually use it, you will be as impressed as the presenter (and, admittedly, I) was: apply some styles, choose a theme and be done with it.

Operations that took ages to get right are now done in a minute and it’ll be very easy to create good looking documents.

Too bad that it’s looking entirely different in practice.

If I watch my parents or even my coworkers use Word, all I see is styles being avoided. Heading 1? Just use the formatting toolbar to make the font bigger and bold.

Increase spacing between paragraphs? Hit return twice.

Add empty spacing after a heading (which isn’t even one from Word’s point of view)? Hit return twice.

Indent text? Hit tab (or even space as seen in my mother’s documents).

This also is the reason why those people never seem to have problems with word: The formatting toolbar works perfectly fine – the bugs lie in the “advanced” features like assigning styles.

Now the problem is that all the features shown in that screencast are totally dependent on the styles being set correctly.

If you take the document shown, as it is before the styling is applied, and then use the theme function to theme it, nothing will happen, as Word doesn’t know the semantic structure of your document. What’s a heading? What’s a subtitle? It’s all plain text.

Conversely, if you style your document the “traditional” way (using the formatting toolbar) and then try to apply the theme, nothing will happen either as the semantic information is still missing.

This is the exact reason why WYSIWYG looks like a nice gimmick at first glance but more or less makes further automated work on the document impossible.

You can try to hack around this, of course – try to see patterns in the user’s formatting and guess the right styles. But this can lead to even bigger confusion later on, as wrong guesses will in the end make the theming work inconsistently.

Without actual semantic analysis of the text (which currently is impossible to do), you will never be able to accurately use features like theming – unless the user provides the semantic information by using styles, which in turn defeats the purpose of WYSIWYG.

So, while I really like that new theming feature of Office 2007, I fear that for the majority of people it will be completely useless as it plain won’t work.

Besides, themes are clearly made for the end user at home – in a corporate environment you will have to create documents according to the corporate design which probably won’t be based on a pre-built style in office.

And end users are the people the least able to understand how assigning styles to content works.

And once people “get” how to work with text styles and the themes begin to work, we’ll be back at square one, where everyone and their friends use the very same theme because it’s the only one looking more or less acceptable – defeating whatever originality was initially in the theme.

Upgrading the home entertainment system

The day when I will finally move into my new flat is coming closer and closer (expect some pictures as soon as the people currently living there have moved out).

Besides thinking about outdated and yet necessary stuff like furniture, I’m also thinking about my home entertainment solution which currently mostly consists of a Windows MCE computer (terra) and my GameCube (to be replaced with a Wii for sure).

The first task was to create distance.

Distance between the video source and the projector. Currently, that’s handled simply by having the MCE connected to the projector via VGA (I’d prefer DVI, but the DVI output is taken by my 23″ Cinema Display) and the GC, the PS2 and the Xbox 360 connected via composite to my receiver, which is in turn connected via composite to the projector.

The distance between the projector and the receiver/MCE is currently about three meters tops, so no challenge there.

With a larger flat and a ceiling mounted projector, interesting problems arise distance-wise though: I’m going to need at least 20 meters of signal cable between receiver and projector – more than what VGA, DVI or even HDMI are specified for.

My solution in that department was the HDMI CAT-5 Extreme by Gefen. It’s a device which allows sending HDMI signals over two normal ethernet cables (shielded preferred) and reaching up to 60 meters of distance.

Additionally, CAT-5 cables are lighter, easier to bend and much easier to hide than HDMI or even DVI cables.

Now, terra only has a DVI and VGA out. This is a minor problem though as HDMI is basically DVI plus audio, so it’s very easy to convert a DVI signal into a HDMI one – it’s just a matter of connecting pins on one side with pins on the other side – no electronics needed there.

So with the HDMI CAT-5 Extreme and a DVI2HDMI adaptor, I can connect terra to the projector. All well, with one little problem: I can’t easily connect the GameCube or the other consoles any more, and connecting them directly to the projector isn’t an option as it’s ceiling-mounted.

Connecting them to my existing receiver isn’t a solution either as it doesn’t support HDMI, putting me into the existing distance problem yet again.

While I could probably use a very good component cable to transport the signal (it is, after all, analog), that would mean three cables going from the receiver/MCE combo to the projector: two for the HDMI extender and one big fat component cable.

Three cables to hide and a solution at the end of its life span anyways? Not with me! Not considering I’m moving into the flat of my dreams.

It looks like I’m going to need a new receiver.

After looking around a bit, it looks like the DENON AVR-4306 is the solution for me.

It can upconvert (and is said to do so in excellent quality) any analog signal to HDMI with a resolution of up to 1080i which is more than enough for my projector.

It’s also said to provide excellent sound quality and – to my geek heart’s delight – it’s completely remote-controllable over a telnet interface via its built-in Ethernet port. It’s even bidirectional: the documented protocol puts events on the line when operating conditions change, for example when the user changes the volume on the device.

This way, I can have all sources connected to the receiver and the receiver itself connected to the projector over the CAT-5 Extreme. Problem solved – and considering how many input sources and formats the Denon supports, it’s even quite future-proof.

I’ve already ordered the HDMI extender and I’m certainly going to have a long, deep look at that Denon. I’m not ready to order just yet though: it’s not exactly cheap, and while I’m quite certain to eventually buy it, the price may well drop a little before November 15th, when I’m (hopefully) moving into my new home.

Windows Vista, Networking, Timeouts

Today I went ahead and installed the RC2 of Windows Vista on my media center computer.

The main reason for this was that the existing installation was very screwed up (as most of my Windows installations get over time – thanks to my experimenting around with stuff) and the recovery CD provided by Hush was unable to actually recover the system.

The hard drive is connected to an on-board SATA-RAID controller which the XP setup does not recognize. Usually you’d just put the driver on a floppy and use setup’s capability of loading drivers during install, but that’s a bit hard without a floppy drive anywhere.

Vista, I hoped, would recognize the RAID controller and I read a lot of good things about RC2, so I thought I should give it a go.

The installation went flawlessly, though it took quite some time.

Unfortunately, surfing the web didn’t actually work.

I could connect to some sites, but on many others, I just got a timeout. telnet site.com 80 wasn’t able to establish a connection.

This particular problem was with my Marvell Yukon chipset-based network adapter: it seems to miscalculate TCP packet checksums here and there, and Vista actually uses the hardware’s capability to calculate the checksums.

To fix it, I had to open the advanced properties of the network card, select “TCP Checksum Offload (IPv4)” and set it to “Disabled”.

Insta-Fix!

And now I’m going ahead and actually starting to review the thing.

lighttpd, .NET, HttpWebRequest

Yesterday, when I deployed the server for my PocketPC-Application to an environment running lighttpd and PHP with FastCGI SAPI, I found out that the communication between the device and the server didn’t work.

All I got on the client was an exception because the server sent back error 417: Expectation Failed.

Of course there was nothing in lighttpd’s error log, which made this a job for Wireshark (formerly Ethereal).

The response from the server had no body explaining what was going on, but in the request-header, something interesting was going on:

Expect: 100-continue

Additionally, the request body was empty.

It looks like HttpWebRequest, with the help of the Compact Framework’s ServicePointManager, is doing something really intelligent which lighttpd doesn’t support:

By first sending the POST request with an empty body and that Expect: 100-continue header, HttpWebRequest basically gives the server the chance to do some checks based on the request headers (like: is the client authorized to access the URL? is there a resource available at that URL?) without the client having to transmit the whole request body first (which can be quite big).

The idea is that the server does the checks based on the header and then either sends a error response (like 401, 403 or 404) or it advises the client to go ahead and send the request body (code 100).

Lighttpd doesn’t support this, so it sends that 417 error back.

The fix is to set the Expect100Continue property of System.Net.ServicePointManager to false before creating the HttpWebRequest instance.

That way, the .NET Framework goes back to plain old POST and sends the complete request body.
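The behaviour is easy to reproduce outside of .NET. Here’s a sketch in Python with a toy HTTP server standing in for lighttpd (the 417 logic is mimicked for illustration, it is not lighttpd’s actual code):

```python
import http.client
import http.server
import threading

# Stand-in for lighttpd's behaviour: any POST announcing
# "Expect: 100-continue" is rejected with 417 instead of being
# answered with "100 Continue".
class PickyHandler(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        if self.headers.get("Expect", "").lower() == "100-continue":
            self.send_response(417)  # Expectation Failed
            self.end_headers()
            return
        length = int(self.headers.get("Content-Length", "0"))
        self.rfile.read(length)  # consume the request body
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), PickyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# A client that announces the expectation is turned away...
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("POST", "/", body=b"payload",
             headers={"Expect": "100-continue"})
status_with_expect = conn.getresponse().status
conn.close()

# ...while a plain POST (the Expect100Continue=false equivalent) works.
conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("POST", "/", body=b"payload")
status_without_expect = conn.getresponse().status
conn.close()
server.shutdown()

print(status_with_expect, status_without_expect)  # 417 200
```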

In my case that’s no big disadvantage, because if the server is reachable at all, the requested URL is guaranteed to be there and ready to accept the data at the HTTP level (of course there may be errors at the application level, but there has to be a request body for those to be detected anyway).