Debugging PocketPCs

Currently I’m working with Windows Mobile based barcode scanning devices. With .NET 2.0, developing real-world applications for these mobile devices in .NET has actually become a viable option.

.NET 2.0 combines sufficient speed at runtime (though you often have to test for possible performance regressions) with a very powerful development library (really usable – compared to .NET 1.0 on smart devices) and unbeatable development time.

All in all, I’m quite happy with this.

There’s one problem though: The debugger.

When debugging, I have two alternatives and both suck:

  1. Use the debugger to connect to the real hardware. This is actually quite fast and works flawlessly, but whenever I need to forcibly terminate the application (for example when an exception happens or when I press the Stop button in the debugger), the hardware crashes somewhere in the driver for the barcode scanner.

    Parts of the application stay in memory and are completely unkillable, and the screen freezes.

    To get out of this, I have to soft-reset the machine and wait half a century for it to boot up again.

  2. Use the emulator. This has the advantage of not crashing, but it’s so slow.

    From the moment I start the application in VS until the application’s screen is loaded in the emulator, nearly three minutes pass. That slow.

So programming for mobile devices mainly consists of waiting. Waiting for reboots or waiting for the emulator. This is wearing me down.

Usually, I change some 10 lines or so and then run the application to test what I’ve just written. That’s how I work, and it works very well because I get immediate feedback and it helps me write code that works in the first place.

Unfortunately, with these prohibitively long startup times, I’m forced to write more and more code in one batch, which means even more time wasted in the debugger.

*sigh*

My take on the intellectual property debate

Even though I fear I’m not totally qualified to have an opinion on the ongoing debate over intellectual property, I think about the problem too, and I certainly do have an opinion.

To say it in the tongue of the Usenet: IANAL, but bear with me as I finally take the time to write down my own ideas on the IP debate:

When you take a look at today’s landscape, you’ll clearly see clashing interests. On one side, you have the authors (I am one of these in a sense – I write software) who more or less wish to make a living from their work. Then you have the people selling the work created by the authors, and then you have the consumers who are supposed to pay to actually consume the work produced.

Of course, we want neither the authors nor the resellers to starve to death, so there must be some incentive for the consumers to actually consume the goods and to compensate the authors – and even more so the distributors – for their work.

That’s what we have created the term intellectual property for.

Even though you as the consumer get to consume the work of the author, that’s all you can do. In theory, you can’t resell, redistribute, copy or do whatever else you’d want with the work of the initial author. You pay for your right to consume the initial work. If you want to do more (like creating a derivative work), you naturally have to pay more (per copy of that derived work you distribute) – at least that’s how society works.

Let me give an example. DRS, the Swiss national radio station, created wonderful audio plays about a certain private investigator called «Franz Musil». The first two parts of the series (currently, there are five of them if I counted correctly) will never ever be available on CD for us consumers to buy:

In the production they used tiny pieces of music for which they don’t have the license to sell on CD.

Even though the original part of that audio play is immense compared to those small pieces of music, the original publisher of the pieces in question still has a say in the distribution of something completely different and original that has come out of the initial work.

Later audio plays contain music they created themselves, and these plays are actually available to buy on CD. This whole situation is bad for us, the consumers (the plays are really good), for DRS (they’d like to sell their original work) and for the initial author of the music in question (because fewer people now hear his work).

Especially in matters of software, it gets even worse though: while copyright law protects the work as a whole, there’s the discussion about patents, which actually manage to protect bits and pieces of your idea as an author.

Let’s say I write a poem and distribute it using the old and known methods (via some publisher); then that poem is protected by the publisher’s copyright (I had to sign over all rights I had on the poem to that company for them to do the work).

If someone takes my publisher’s poem (remember, it’s no longer my poem – it’s the publisher’s), sets his own name below it and sells it, then he violates my publisher’s copyright. So far so good.

But imagine that my publisher went further and, besides taking all rights to my work, also patented the «method or apparatus to put letters in context to form a meaning»… (don’t laugh – today’s understaffed and underqualified patent offices can clearly be fooled into granting such a patent)

Now my publisher not only made sure that my poem can’t be copied, they also made sure that no one else will ever be able to write a poem by lining up characters.

Now let’s go ahead to distribution to consumers, but let’s stay with my poem (which is the only poem in existence due to the act of spelling now also being my distributor’s property).

Naturally, my distributor wants to maximize the cash they can make with their newly acquired poem. On one hand, they have expensive lawyer bills to pay and on the other, they try to use their new poem to get back the money wasted on less successful poems that came before the one I initially wrote (just to say it once and for all: I don’t write poems. And if I did, I would never assign the copyright to a publisher).

Now, for a poem, you have a fixed-size group of recipients: people capable of reading (and thus violating that patent granted earlier) and interested in poems.

So to maximize income, the publisher must make sure that everyone in the targeted group goes ahead and pays the distributor for that new poem. Besides advertising it to reach an initial amount of people, the publisher makes sure that everyone reading that poem pays for doing so one way or another.

One way is to sell books. Another is to publicly perform the poem while getting paid both from entrance fees and third-party sponsors. Or they create an audiobook and sell that.

Of course, if the publisher sells a book to one person, they obviously would want to sell another book to a friend of that person. This is why copying is disallowed.

To further maximize profits, the publisher now sees a way to make the initial person actually buy more than one copy of the same book: A book you buy destroys itself after a set number of days. And you can only read the book while in one predefined room. When you move to another location, the book renders itself unreadable.

All that magic protecting that book can of course go wrong due to various reasons and in that case, the publisher can make the person go ahead and just buy another copy of the same book…

And this is what’s fundamentally wrong.

People are not used to not owning something they pay for.

When I buy myself an apple, I can eat it when I want and where I want. When I buy myself furniture, I can place it where I want and I can sell it to whomever I want. But when I buy a piece of music in the iTunes music store (using this as an example because it’s well-known), then I can only hear it on so many devices. If I buy the n-th new computer, I need to buy the song again. Also, I cannot resell the song. And one day, when Apple is gone or running the Music Store is no longer interesting for them, my songs will stop working too.

When I buy a book, it’s my responsibility to handle it with care and if I succeed in doing that, then the book I buy today is still readable in hundreds of years. No external influence not ultimately under my control can take away that book from me. No company going out of business, no company losing interest in providing me with a “license” to read my book.

The more time passes, the more patents are granted and the more strict DRM is put in place.

And – now we finally come to the core of the whole thing – the more strictly distribution of new content is handled and the more expensive creating derivative work gets, the more our society gets stuck.

I postulate that no person is able to create truly original works. Everything one creates is influenced by outside factors: news postings, books, music, other software. Either you accept that outside influence and improve upon it, or you get slowed down more and more, always hitting walls because “someone was already there”.

By enforcing distribution limitations and patents, and thus restricting the building blocks of future work, society slows down its scientific and cultural evolution. Or it passes control over that evolution fully to big distribution companies that actually have the money to pay all the royalties needed.

Individual authors (no matter what profession) lose their capability of creating and releasing novel work because each and every possible building block is protected and owned by a big company.

The final goal of the current system will be a conglomerate of two to three big companies owning all rights to all new scientific and cultural advancement. These companies will constantly pay each other royalty fees for the patents and copyrights they violate among themselves.

If you want to be an author, you are not allowed to create any work until you have a contract with one of these big companies. Working will only be possible in close proximity to a lawyer because the big companies still want to maximize their earnings and thus watch closely to minimize the cost of the new work created.

When we reach that point, all advancement of civilization (which is in large part defined by advancement of culture) comes to a halt and we end up back in the Middle Ages, where only a few enlightened people (monks) were able to create cultural works (because only they could write). Everyone else had to work for their survival and pay taxes.

In an ideal world, copyright and patent law gets radically changed to allow anyone to freely create derivative works, as long as a certain percentage of the created work is new content and the original content is attributed.

Let’s say 60%, though this obviously must be tweaked by people far more intelligent than I am.

If I write a poem, in the ideal world, I can keep the copyright and I can distribute it however I like. Or I can ask a publisher to do that work for me while I keep the initial copyright. The more work the distributor has to do to advertise my work, the more I will be paying them. No changes here, besides the fact that I retain the copyright.

The distributor still tries to sell the product. But as creating derivative works is now permitted within certain boundaries, expenses for both legal and technical protection go down. The publisher can once again focus on what they were paid to do in the first place.

If someone really likes my poem, she can go ahead and take it to create a new, better poem. Maybe longer. Maybe with a completely different message. Maybe the new author just takes out a verse or two. Maybe the whole poem. It doesn’t matter.

When she is finished, she roughly checks that there’s 60% of novel art in it and then goes ahead to distribute the poem – either herself or via a publisher.

This model, by the way, works. It’s in use today. Every day. It’s an invention by geeks like you and me. It’s called Free Software. It doesn’t even have a limitation that defines a percentage of new content required for redistribution under one’s own copyright.

Despite creating a platform where knowledge is openly shared, people are still able to make a living from their work. The money is in the services rendered for a specific need: customize a piece of software for a specific working environment; publicly present that poem from the example above at some poetry event; provide the end user with a package of multiple poems collected together in one book…

There are so many things still to do that are completely doable without forcing all scientific and cultural advancement of society to stop or at least pass through a lawyer and through the courts.

We are the new generation. It’s our task to see the shortcomings of the current system. It’s our task to see opportunities to create a new and better system.

It’s our task to fix this problem once and for all.

The whole Free Software movement is a big step in the right direction. Thank you, Free Software community. You show us the way we all have to go.

Let’s move!

Quality of video game consoles

First there was the Red Ring of Death, then we got the beep of death, and now we’ve got the Error 110213 of death.

What is it with modern game consoles?

Remember the NES? Plug in, turn on, play.

I know so many people who owned or still own a NES. Not one of them ever had a defective device.

Same goes for the SNES. Or any other console.

Is this obvious degradation in quality the price of ever-increasing complexity? Is this the price of abstraction?

I wonder: what will ultimately put an end to the ever-increasing evolution of technical devices as we know them today? Is it physical limitations like the theory of relativity, or is it the plain inability of our brains to comprehend the complexity of the devices we create?

Living without internet at home

When your fuse box looks like the one on this photo and your bedroom wall looks like this, then you can be sure of one thing: you don’t have power.

What’s more interesting, though, is that for once in my whole life, Cablecom did something right: three months ago, I had them move my cable internet access from my old address to the new one by November 15th.

The problem is that you have to do this three months in advance and back then, I wasn’t sure how long the renovation of my bathroom was going to take. So I guessed.

Of course that guess turned out to be wrong: The bathroom, while making splendid progress, is still two weeks off from being completed.

But there was no way to explain that to Cablecom.

They successfully switched my internet connection over from my current flat to the new one, where I don’t have my stuff or some essential parts of my furniture (like my bed) and, even worse, no power, no water and no toilet (that one is currently lying on the balcony, waiting for the bathroom to be completed before it can be plugged back in).

So for now, Internet is something I can only have at work.

The irony is that usually, Cablecom screws up everything you may want from them. Their internet access is flawless and always working, but whenever you have any administrative request, you can be sure they screw up.

To underline this, here are two nice conversations I had with them:

Me: Why do I not get any bills from you? As much as I like not paying for your service, I’d hate you turning it off because I’m not paying for it. Please start sending me bills!

Them: What’s your customer ID?

Me: No idea. But my name is Philip Hofstetter and I live at …

Them: Let me check…

Them: Are you sure that you are our customer? I can’t find you here…

Me: Totally sure. Yes.

Them: That just can’t be.

Me: And yet it is: As a matter of fact, I’m currently using the phone you have sent to me calling over your connection you provide me with and I’d really like to pay for.

Them: Sure?

That episode ended with me getting one hell of an envelope containing about 20 bills. I’m sure that had I not called, I would have been able to surf and phone for free, but I didn’t want to take the risk of ending up with no internet and no way of getting it back. Besides, not paying for a service used is unfair for both the provider and the other people who are forced to pay.

The other episode was shorter and happened to Ebi, a friend of mine:

Ebi: Hello, I have a question: What is my customer ID? My Name is xxx and I live in xxx

Them: No problem. Can I first have your customer ID though?

Other episodes revolve around redundant modems being delivered, accounts where multiple bills are sent for the same service, their inability to fix an obvious defect in the in-house repeater, or a CHF 100’000+ water damage caused by them not sealing a pipe properly (their insurance paid for that, of course).

Still: Their internet service is kick-ass! No downtime. Maximum speeds. No forced disconnection. No forced reverse proxy or other crap.

That’s why I prefer them to any ADSL provider out there.

It’s just ironic that a company this prone to screwing up administrative tasks actually does the right thing that one time where some delay would not have mattered – or would even have been preferred.

Well… at least I have one more reason to look forward to December now.

ServeRAID – Fun with GUI-Tools

We’ve recently bought three more drives for our in-house file server. Up until now, we had a RAID 5 array (using an IBM ServeRAID controller) spanning three 33GB drives. That array recently got very, very close to being full.

So today, I wanted to create a second array using the three new 140GB drives.

When you download the ServeRAID support CD image, you get access to a nice GUI tool which is written in Java and can be used to create arrays on these ServeRAID controllers.

Unfortunately, I wasn’t able to run the GUI at first because somehow, the Apple X11 server wasn’t willing/able to correctly display the GUI. I always got empty screens when I tried (the server is headless, so I had to use X11 forwarding via ssh).

Using a Windows machine with Xming (which is very fast, works perfectly and is totally free as in speech) worked, though, and I got the GUI running.
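
For the record, the forwarding setup itself is nothing special. Roughly this, with user, host and launcher names being placeholders (I don’t remember what the support CD actually calls its start script):

# all names below are placeholders; adjust to your environment
ssh -Y admin@fileserver        # -Y requests trusted X11 forwarding back to the local X server
cd /opt/serveraid-manager      # wherever the support CD contents were unpacked
./RaidMan.sh                   # hypothetical name for the Java GUI launcher script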

All three drives were recognized, but one was listed as “Standby” and could not be used for anything. Additionally, I wasn’t able to find any way in the GUI to actually move the device from Standby to Ready.

Even removing and shuffling the drives around didn’t help. That last drive was always recognized as “Standby”, independent of the bay I plugged it into.

Checking the feature list of that controller showed nothing special – at first I feared that the controller just didn’t support more than 5 drives. That fear was without reason though: The controller supports up to 32 devices – more than enough for the server’s 6 drive bays.

Then, looking around on the internet, I didn’t find a solution for my specific problem, but I found out about a tool called “ipssend”, and there was documentation on how to use it in an old manual by IBM.

Unfortunately, newer CD images don’t contain ipssend any more, forcing you to use the GUI, which in this case didn’t work for me. It may be that there’s a knob to turn somewhere, but I just failed to see it.

In the end, I found a very, very old archive on the IBM website which was called dumplog and contained that ipssend command in a handy little .tgz archive. Very useful.

Using that utility solved the problem for me:

# ./ipssend setstate 1 1 5 RDY

No further questions asked.
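
For what it’s worth, my reading of the old manual is that the arguments are the controller number, the channel, the SCSI ID of the drive and the target state. So the line above should break down like this (my interpretation, not official documentation):

# syntax, as far as I could reconstruct it from the old IBM manual
./ipssend setstate <controller> <channel> <scsi id> <new state>
# controller 1, channel 1, the drive at SCSI ID 5, set it to Ready
./ipssend setstate 1 1 5 RDY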

Then I used the Java-GUI to actually create the second array.

Now I’m asking myself a few questions:

  • Why is the state “Standby” not documented anywhere (this is different from a drive in Ready state configured as a standby drive)?
  • Why is there no obvious way to de-standby a drive with the GUI?
  • Why isn’t that cool little ipssend utility officially available any more?
  • Why is everyone complaining that command line is more complicated to use and that GUIs are so much better when obviously, the opposite is true?

The atmosphere in good games

I’m a big fan of the Metroid series.

It took me a long while to get used to it though. Back in the day, when there was just Metroid, I never got very far in it – and I had only seen the game running at a friend’s home.

Then came the emulators and I gave Super Metroid a shot, but I didn’t get it. I didn’t know what to do, where to go and how to progress – the whole thing didn’t make any sense to me.

Then came Metroid Fusion on the GBA which I actually bought.

And this was when I got it.

The concept is the same as in Zelda: you walk as far as you can go with your current equipment, you get better equipment which opens new paths, and then finally, you meet the last boss.

Of course there’s another element to a real Metroid game: brilliant level design. The designers have thought of so many places where you can “cheat” and break the obvious sequence of events. Doing so ranges in difficulty from quite hard to pull off at first but easy later on, to insanely hard.

Metroid Fusion is a bit off in this regard though – its sequence is quite linear and there’s only one relevant part in the game where you can skip some content and are rewarded with an extra movie sequence. Additionally, it’s hard as hell to pull off. Much, much harder than the linked video may make you think, as it depends on your reacting within tenths of a second.

But now to the topic: Metroid Prime. And Prime Echoes.

When I started with Prime, I had the same problem as when I started with the 2D Metroids: I had no idea where to go, what to do or even how to navigate the world.

This was partly caused by a bad projector with very, very bad contrast in dark areas of the picture – everything was more or less dark gray or black on that projector. Not much fun to play like that.

On the other hand, I played the game like I would play a 3D shooter, expecting the usual smaller levels, lots of shooting and shallow gameplay. Of course this is totally the wrong approach to a game like Metroid Prime. For 10 minutes, force yourself to think you are playing Super Metroid. Immerse yourself in the world – you have to force yourself for these 10 minutes. And of course, get a better projector.

Then it clicked.

This was a real Metroid. It felt like one and it played like one.

But then something more happened. Something that’s the reason why I don’t play either Prime or Echoes any more. And the reason is the most impressive thing a game could ever accomplish: I stopped playing out of plain fear. Plain and simple fear.

Fear of the bosses. Fear of the lights turning off and these awful chozo ghosts spawning. Fear of small, cramped rooms. Fear of darkness. And in Echoes it was even worse: Fear of being alone in the dark. Fear of dying alone on the dark side of the planet. Fear of being eaten alive by the darkness surrounding Samus (and actually hurting her).

Notice though: this is not the usual fear of losing an extra life by missing a jump and landing in a hole. It’s not the fear of running out of life energy. Those are plain old-style video game fears.

No. Metroid is real. The fear is real. You see, both games have an incredibly well balanced learning curve. You practically can’t die. It can take you longer to accomplish something when you aren’t that good/precise, but you don’t die. At least I never did.

The atmosphere created by the games is what makes it all seem real. There’s that encyclopedia with an entry for every creature – even plant life – you encounter. Then there are no visible borders between levels. Sure, you zone between different places, but everything is connected. Progress isn’t something that lets you leave zones behind you. Progress is fluent. You go there, come back, go there again… The world feels real.

Samus is all alone in that big world, while there are still artifacts reminding you of that old civilization. And there are real dangers in that world.

And the music works very, very well too. Light tunes, sometimes menacing, always fitting.

The art style, too, helps complete the illusion of reality. It’s not very detailed (it’s a GC game after all), but it fits. It creates a believable world.

All those little parts come together to create something I’ve never before seen in any game I have played. It brings emotions to a new level. The fear I had when playing Prime and Echoes was real. Real fear of the darkness. Of loneliness. And of drowning in that crashed space pirate ship in Prime – I know there is no limit on how long you can be submerged, but still, it felt so incredibly real.

In the end, it was too much for me.

I couldn’t bring myself to boot up the game any more – out of fear of dark areas or enemies jumping at me.

So what to say? Both GC Metroids are what I’d like to call the perfect game, as they awaken real emotions. Something I have never felt when using any other entertainment medium. Watching a movie feels like watching a movie. Reading a book is always reading a book. Playing Half-Life (with much better graphics but a much less credible atmosphere) is like playing a game. Even playing WoW is obviously playing a game.

But playing Metroid is living the game. It’s living the world created by these talented designers.

Unfortunately, even though they have created the perfect game, I’m unable to play it. The perfection put into the design made me too afraid to actually play the game.

Now, after around two years, I finally realized that. And I’m just plain impressed.

Do you know the games I was writing about? Did you feel the same? Do you know other games making you feel like that?

MySQL in Acrobat 8

I have Acrobat 8 running on my Mac. And look what I’ve found by accident:

I had console.log open to check something, when I found these lines:

061115  9:57:48 [Warning] Can't open and lock time zone table: Table 'mysql.time_zone_leap_second' doesn't exist trying to live without them

/Applications/Adobe Acrobat 8 Professional/Adobe Acrobat Professional.app/Contents/MacOS/mysqld: ready for connections.

Version: '4.1.18-standard'  socket: '/Users/pilif/Library/Caches/Acrobat/8.0_x86/Organizer70'  port: 0  MySQL Community Edition - Standard (GPL)

MySQL shipped with Acrobat? Interesting.

The GPL version shipped with Acrobat? IMHO a clear license breach.

Of course, I peeked into the Acrobat bundle:

% pwd
/Applications/Adobe Acrobat 8 Professional/Adobe Acrobat Professional.app/Contents/MacOS
% dir mysql*
-rwxrwxr-x    1 pilif    admin     2260448 Feb 20  2006 mysqladmin
-rwxrwxr-x    1 pilif    admin     8879076 Feb 20  2006 mysqld

Interesting. Shouldn’t a commercial edition print something other than “Community Edition (GPL)”? Even if Adobe doesn’t violate the license (because they are just shipping the GPLed server and have either bought the client library (which is GPL too) or written their own client), the GPL clearly states that I can get the source code and a copy of the license. I couldn’t find these anywhere though…
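
If you want to check what exactly ships on your machine, asking the binary directly should work. A quick sketch using the standard mysqld switch (path as found above):

% cd "/Applications/Adobe Acrobat 8 Professional/Adobe Acrobat Professional.app/Contents/MacOS"
% ./mysqld --version    # prints the exact version and build flavor of the bundled server binary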

I guess I should ask MySQL what’s going on here.

Bootcamp, Vista, EFI-Update

Near the end of October I wanted to install Vista on my Mac Pro, using Bootcamp of course. The reason is that I need a Windows machine at home to watch speedruns on, so it seemed like a nice thing to try.

Back then, I was unable to even get setup going: whenever you selected a partition that’s not the first partition on the drive (where OS X must be), the installer complained that the BIOS reported the selected partition to be non-bootable, and that was it.

Yesterday, Apple released another EFI update which was said to improve compatibility with Bootcamp and to fix some suspend/resume problems (I never had those).

Naturally, I went ahead and tried again.

The good news: Setup doesn’t complain any more. Vista can be installed to the second (or rather third) partition without complaining.

The bad news: the Bootcamp driver installer doesn’t work. It always cancels out with some MSI error and claims to roll back all changes (which it doesn’t – sound keeps working even after that «rollback» has occurred). This means: no driver support for the NVIDIA card of my Mac Pro.

Even after trying to fetch a Vista-compliant driver from NVIDIA, I had no luck: the installer claimed the installation to be successful, but the resolution stayed at 640x480x16 after a reboot. Device manager complained about the driver not finding certain resources to claim the device and that I was supposed to turn off other devices… whatever.

So in the Mac Pro case, I guess it’s a matter of waiting for updated Bootcamp drivers from Apple. I hear though that the other machines – those with an ATI card – are quite well supported.

All you have to do is launch the Bootcamp driver installer with the /a /v parameters to just extract the drivers, and then point the device manager at that directory to manually install them.
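
On the command line, that should look roughly like this. The installer file name and the target directory are placeholders (the /a switch triggers an administrative, extract-only install and /v passes properties through to the embedded MSI):

REM file name and target directory are placeholders; use the installer from your Bootcamp driver CD
D:\setup.exe /a /v"TARGETDIR=C:\BootcampDrivers"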

The pain of email SPAM

Lately, the SPAM problem in my email INBOX has gotten a lot worse. Spammers seem to check more and more whether their mail gets flagged by SpamAssassin and tweak the messages until they get through.

Due to some tricky aliasing going on on the mail server, I’m unable to properly use the Bayes filter of SpamAssassin on our main mail server. You see, I have an infinite number of addresses which are in the end delivered to the same account, and all that aliasing can only be done after the message has passed SpamAssassin.

This means that even though mail may go to one and the same user in the end, it’s seen as mail for many different users by SpamAssassin.

This inability to use Bayes with SpamAssassin means that lately, SPAM has been getting through the filter.

So much SPAM that I began getting really, really annoyed.

I know that mail clients themselves also have Bayes-based SPAM filters, but I often check my email account with my mobile phone or on different computers, so I depend on a solution that filters out the SPAM before it reaches my INBOX on the server.

The day before yesterday I had enough.

While all mail for all domains I’m managing is handled by a customized MySQL-Exim-Courier setup, mail to the @sensational.ch domain is relayed to another server and then delivered to our Exchange server.

Even better: that final delivery step is done after all the aliasing steps (the catch-all aliases being the difficult part here) have completed. This means that I can in fact have all mail to @sensational.ch pass through a Bayes filter, and the messages will all be filtered for the correct account.

This made me install dspam on the relay that transmits mail from our central server to the Exchange server.

Even after only one day of training, I’m getting impressive results: DSPAM only touches mail that isn’t already flagged as spam by SpamAssassin, which means everything it sees has been carefully crafted to look “real”.

After one day of training, DSPAM usually detects those junk messages, and I’m down to one false negative per 10 junk messages (and no false positives).
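
When one does slip through, I feed it back for retraining. From memory, the call looks roughly like this (user and file name are placeholders, and your dspam build may want slightly different options):

# retrain a missed junk message (a false negative); user and file name are placeholders
# --source=error marks this as a correction of an earlier classification
dspam --user pilif@sensational.ch --class=spam --source=error < missed-spam.eml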

Even after running SpamAssassin and thus filtering out the obvious suspects, a whopping 40% of the emails I’m receiving are still SPAM. In other words, nearly half of the messages not already filtered out by SA are junk.

If I take a look at the big picture, even when counting the mails sent by various cron daemons as genuine email, I’m getting much more junk email than genuine email per day!

Yesterday, Tuesday, for example, I got – including mails from cron jobs and backup copies of order confirmations for PopScan installations currently in public tests – 62 genuine emails and 252 junk mails, of which 187 were caught by SpamAssassin and the rest were detected by DSPAM (with the exception of two mails that got through).

This is insane. I’m getting four times more spam than genuine messages! What the hell are these people thinking? With that volume of junk filling up our inboxes, how could any of these “advertisers” think that somebody is both stupid enough to fall for such a message and intelligent enough to pick the one to fall for from all the others?

Anyway, this isn’t supposed to be a rant. It’s supposed to be praise for DSPAM. Thanks guys! You rule!