Fun with a Tablet PC

I laughed at them. Just like everyone did. I mean: why on earth should I pay more to get less? Tablet PCs usually have much too small a monitor and are much too underpowered – not to speak of the resolution (I’m quite the screen-resolution guy anyway, considering that I’m seriously thinking about buying myself a T42p with its cool 1600×1200 resolution just because of that. But then: have you ever really used Delphi? If yes, you know what I mean). And on top of that: why on earth should I rely on handwriting recognition when everyone knows that it doesn’t work?

Then I got a tablet on my desk to evaluate its potential as a mobile device running our PopScan. While it’s not important what brand the thing actually was (Acer, in this case), and while I certainly did not have the opportunity to test it as thoroughly as I did my T40, one thing I did see: Tablet PCs are cool. Really cool.

For one thing, there is that extremely powerful handwriting recognition engine. In contrast to all the other engines I’ve seen, the one running on the tablets really works. Without any training or getting used to it on my part, I had a recognition rate of about 95%, the exceptions being non-words anyway (like gnegg, or “Sauklaue”, which actually got recognized as Saddam – “Sauklaue” is what you call really terrible handwriting in German). The engine is so good that it can actually serve as a keyboard replacement – at least if you’re not writing overly long texts (like this entry here ;-) )

But the real killer application of the thing is the included Microsoft Journal: a digital notepad (the name Notepad was already taken for something… else… in Windows). You just make your notes, which works very well thanks to the pressure-sensitive pen and because you can rest your hand on the display while writing – the tablet reacts to the pen only. Then, when you are done, you can draw a circle around the text you want to have recognized. Journal will do as you ask and replace your writing with a regular text box, leaving your drawings in place.

This fits my workflow perfectly. I usually have a piece of paper lying on my desk, serving as a container for all the small stuff I have to keep in mind: line numbers, small concepts, interface definitions – quite a lot of stuff, actually. Then, when the paper gets full, I throw it away and take a new one.

If I could take those notes on a Tablet PC, I could actually keep them. But not only that: I could search them – in full text (recognition is done in the background)! And it does not stop there: when I actually wrote down program code in those notes, I could immediately reuse it instead of manually retyping it.

All this potential is unlocked by the really great UI the Journal has: you can insert space anywhere you want, pushing down the content below (and doing so quite intelligently), you can copy and paste your drawings (sometimes I really wished I could do that on paper) – and all that with a really simple UI. This is so incredibly great.

So to all those people laughing about Tablet PCs: try them! Maybe you will be quite surprised. I for my part am quite sorry I had to send the thing back.

Vendor lock-in

But, as Tom Kyte points out in his latest book, Effective Oracle by Design (Oracle Press), database dependence should be your real goal because you maximize your investment in that technology. If you make generic access to Oracle, whether through ODBC or Perl’s DBI library, you’ll miss out on features other databases don’t have. What’s more, optimizing queries is different in each database.

Needless to say on which vendor’s webpage I found the article this quote is from. One thing you learn in practical life is that it’s extremely difficult to switch databases once you begin using proprietary features. And you will have to switch. Sooner or later. Be it because of insufficient functionality (as I’ve seen with MySQL – I am still cursing the day I began using SETs), vendors going out of business, or even political reasons.

While I certainly see some value in using proprietary features, let me tell you: use them with care. Always be on the lookout for different approaches that do the same thing. If there are none, don’t do it (don’t use SETs in MySQL, for example).
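To illustrate what such a different approach can look like: the per-row set of flags that a MySQL SET column encodes can be modeled with a plain join table instead, which works on any SQL database. A sketch using SQLite, with table and column names made up for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Instead of: CREATE TABLE article (..., flags SET('new','sale','featured'))
    CREATE TABLE article (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE flag (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    -- one row per (article, flag) pair replaces the SET column
    CREATE TABLE article_flag (
        article_id INTEGER REFERENCES article(id),
        flag_id    INTEGER REFERENCES flag(id),
        PRIMARY KEY (article_id, flag_id)
    );
""")
con.execute("INSERT INTO article VALUES (1, 'Widget')")
con.executemany("INSERT INTO flag (name) VALUES (?)",
                [("new",), ("sale",), ("featured",)])
con.executemany("INSERT INTO article_flag VALUES (1, ?)", [(1,), (2,)])

# "Which flags does article 1 have?" -- standard SQL, no SET needed
rows = con.execute("""
    SELECT f.name FROM flag f
    JOIN article_flag af ON af.flag_id = f.id
    WHERE af.article_id = 1 ORDER BY f.name
""").fetchall()
print([r[0] for r in rows])   # prints ['new', 'sale']
```

The join table costs an extra join per query, but it survives a database switch unchanged – which is the whole point.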

And if you can only get the full performance out of your RDBMS by relying on proprietary features, don’t use the RDBMS at all as it’s quite obviously not the right system. Performance must be available without being forced to use proprietary features. At least without relying on features in the query language itself – optimizations in the backend are ok for me.

This is one of the reasons I don’t use Oracle, by the way. The other being this ;-)

Gentoo and Jabber

I did my first experiments with Jabber back in 2002, and I really liked what I saw while I was still reading the documentation. Setting up the server was a real pain, but eventually I got it working.

Then came the thing with our server, and remembering the hard work needed to set up Jabber, I decided not to rebuild the Jabber configuration – even more so because aim-transport still does not support those fancy iChat AIM accounts, while Trillian does.

But today, after seeing that iChat in Tiger is going to support Jabber, I finally decided that adding it back to my beloved server would be a cool thing…

And the whole adventure turned out to be another point where Gentoo shines above all other distributions: the ebuilds for jabber and the two transports I am using (AIM and ICQ) were already beautifully preconfigured. And not only that: they were current, too (hint to Debian… ;-) )

One thing did not work at first: I could not register with the AIM transport. A quick glance at the configuration file of aim-t showed me that the preconfigured config file uses a different port (5233) than the recommended settings in the main configuration file (5223).

All in all it took me about 10 minutes to get my old Jabber installation back – with current versions of all the tools involved and without writing my own startup scripts or other fancy stuff. This is one of the reasons I really like Gentoo.

Oh… and in case you ask: My Jabber-ID is pilif@chat.sensational.ch. It’s not listed in the global user directory.

And if you’re asking what client I’m using: though its interface may need some improvement, jajc is in my opinion the best client you can get if you are using Windows.

Refactoring – It’s worth it

Just shortly after complaining about not having time for refactoring, I reached a place in my code where it was absolutely impossible to add feature x without cleaning up the mess I created three years ago. And, what’s even better: I had the time to really fix it. Cleanly.

What I did was sit down and recreate the whole module in a new Delphi project. I knew what features I wanted to have when finished, and I more or less knew the interface I had to comply with. The latter proved impractical, so I made some modifications to the interface itself (that thing was hacky too). Redoing the whole module took about a week (it’s about downloading stuff, extracting it and then XML-parsing it – everything in a thread while still providing feedback to the main thread), but it was absolutely worth it:

  • The code is clean. And by clean I mean so clean that adding further features will still be clean – not that much will be needed, though, as the new framework I’ve created is extremely powerful.
  • The thing is fast. Twelve times faster than the old version: I’m processing 7000 datasets in just 20 seconds now (including the time needed for downloading and decompressing), which took four minutes before.
  • The thing is more usable. Status reporting to the end user went from nearly nothing to everything the user may need. And she can now cancel the process – of course.
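For what it’s worth, the overall shape of such a module – do the work in a background thread, report progress through a callback, honor a cancel flag – can be sketched in a few lines. This is a simplified Python analogue of the Delphi design, with all names invented for the example:

```python
import threading
import xml.etree.ElementTree as ET

def process(documents, on_progress, cancel):
    """Parse a list of XML documents, reporting progress and honoring cancel."""
    results = []
    for i, doc in enumerate(documents):
        if cancel.is_set():          # the user pressed Cancel
            break
        root = ET.fromstring(doc)    # stands in for download + decompress + parse
        results.append(root.findtext("name"))
        on_progress(i + 1, len(documents))
    return results

cancel = threading.Event()
progress = []
worker_result = []

# run the work in a background thread so a UI thread would stay responsive
t = threading.Thread(target=lambda: worker_result.extend(
    process(["<item><name>a</name></item>",
             "<item><name>b</name></item>"],
            lambda done, total: progress.append((done, total)),
            cancel)))
t.start()
t.join()
print(worker_result, progress)   # ['a', 'b'] [(1, 2), (2, 2)]
```

The real module of course downloads and decompresses the data before parsing; the point here is the separation between the worker, the progress callback and the cancel switch.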

A task fully worth undertaking. I’ve not been this pleased with my code for quite some time.

SonyEricsson, IMAP, Exchange

Since we switched to Exchange I’ve been unable to get my email on my SonyEricsson phones (first a T610, then a Z600 – talk about buying too many mobiles per time unit ;-) ). Every time I tried to connect, I immediately got a “Server not found”.

Today I’d had enough. This must be fixed, I told myself, and set out to fix it. And as the category for this entry is “Solutions”, I actually did solve it.

A quick check with netcat on the firewall (after turning off the port forwarding rules) revealed that it wasn’t actually a connection problem I was running into: the phone connected just fine. So it had to be something with Exchange…
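Incidentally, the same check works without netcat: a small connectivity probe – host and port below are placeholders – is enough to tell a blocked port from a problem higher up the stack. A sketch in Python:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# If this says True, a "Server not found" on the client is not a
# connectivity problem -- the trouble lies further up the stack.
print(port_open("localhost", 143))
```

If the port answers, the firewall and the forwarding rules are off the hook, and you can concentrate on the server itself.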

The event log on the server revealed nothing at all. As always with Microsoft products: the messages are either not there or completely incomprehensible.

Next I tried setting the server to maximum logging (Exchange Manager, right-click on your server, Properties, tab “Diagnostics Logging”, IMAP4Svc). The result was two entries in the event log: client XXX connected, client XXX disconnected. Extremely helpful. Nearly as helpful as the “Server not found” my cellphone was throwing at me (see note below).

I realized that this wasn’t getting me anywhere, so I went and got the cannon to shoot sparrows with: I downloaded Ethereal and listened in on the conversation my phone was having with Exchange:

S: * OK Microsoft Exchange Server 2003 IMAP4rev1 server version x.x (xxx) ready.
C: A128 AUTH xxxx xxxx
S: A128 BAD Protocol Error: "Expected SPACE not found".

(I won’t ask why the phone isn’t checking the capabilities before logging in. This is not what I call a clean implementation.)

Not very helpful either. At least not for me, knowing the IMAP RFC just well enough to understand what the A128 stands for (it’s a transaction tag which allows for asynchronous command execution; the server prefixes its answers to commands with the tag given by the client), but not much else. So I had to try something else: logging in with Mozilla Thunderbird, where I had no problems. After one failed attempt where I forgot to turn off SSL (…), I got this:

S: * OK Microsoft Exchange Server 2003 IMAP4rev1 server version x.x (xxx) ready.
C: 1 capability
S: * CAPABILITY (...) AUTH=NTLM
S: 1 OK CAPABILITY completed.
C: 2 login "xxx\xx" "xxx"
S: 2 OK LOGIN completed.

(Now that I’m reading through this (still without having read the RFC): isn’t the server lying here? It only claims to accept NTLM auth, but Mozilla seems to ignore that and uses a plain LOGIN to log in, which the server accepts too. Enlighten me!)

Aha! We seem to have a quoting issue in the phone. Good. Even better: the issue seems to be that the phone does no quoting at all, which is fine, because then we can do the quoting ourselves in the preferences screen.

After one failed attempt with two spaces after the username in the LOGIN line – fixed by removing a trailing space that had somehow been added to the phone’s username field – I actually got it working. Yes. I’m reading my mail on the phone. It works!

So, if you are having problems connecting to an Exchange server with a SonyEricsson phone, do the following:

  • Enter the username as "DOMAIN\username" (with quotes). Make sure there are no spaces before the first or after the last quote.
  • Enter the password as "password". Include the quotes too, and remove any spaces that may linger around.

In other words:

  • Escape backslashes with another one: \ -> \\
  • Put username and password in double quotes (")
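Those two rules are exactly what the IMAP RFC calls a “quoted string”; a helper applying them could look like this (the function name is mine):

```python
def imap_quote(s):
    """Turn a raw string into an IMAP quoted string:
    escape backslashes and double quotes, then wrap in double quotes."""
    return '"' + s.replace("\\", "\\\\").replace('"', '\\"') + '"'

# What you would type into the phone's username and password fields:
print(imap_quote("DOMAIN\\username"))   # prints "DOMAIN\\username"
print(imap_quote("secret"))             # prints "secret"
```

The backslash has to be replaced first, otherwise the backslash added in front of an escaped quote would itself get doubled.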

Dann klappt’s auch mit dem Nachbarn! – “then it works with the neighbor, too” (from a stupid German commercial; forget it if you don’t understand it).

One final note: <rant>Everything would have been so much easier if only there were more useful error messages involved. While I completely understand that the designers of the software don’t want to overwhelm their users and thus create seemingly simple messages, they should absolutely provide a “Details” link somewhere where the whole message can be read. Granted, cellphones are limited, so in a way I can accept the message I got there. What I cannot accept is the way Exchange logs the errors it encounters. Why on earth doesn’t a protocol error get logged when logging is set to “Maximum”?</rant>

Refactoring – If only I’d had time

Refactoring is a cool thing to do: you go back to the drawing board and redesign some parts of your application so that they better fit the new requirements that have built up over time. Sometimes you take old code and restructure it, sometimes you just rewrite the functionality in question (or even the whole application, but I don’t count that as refactoring any more).

Code always has a tendency to get messy over time, as new requirements arise and must be implemented on top of existing code. Not even the most brilliant design can save your code: it’s impossible to know what you are going to do with it in the future.

Let’s say you have an application that is about orders. Orders with ordersets that somehow get created and then processed. Now let’s say you create quite a usable model of your orders and ordersets. Very well. It’s nice, it’s clean and it works.

And then comes the customer, and over the years new features are added – let’s call one of them an inventory mode. You notice that these new inventory records have quite a lot in common with your orders, so you reuse them, but add some features.

Now full stop! It has already happened. Why on earth are you reusing the old code and “just adding features”? That’s not the way to go. The correct solution would be to abstract the common parts of your orders and inventory records away into something like TProductContainer (using Delphi naming conventions here) with two descendants, TOrder and TInventoryRecord.
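Translated into Python for the sake of a compact sketch (class, attribute and method names are invented; the Delphi names map as noted in the docstrings):

```python
class ProductContainer:
    """Common base for anything that holds product lines (TProductContainer)."""
    def __init__(self):
        self.lines = []

    def add_line(self, product, quantity):
        self.lines.append((product, quantity))

    def total_quantity(self):
        return sum(q for _, q in self.lines)

class Order(ProductContainer):
    """An order adds order-specific state on top of the base (TOrder)."""
    def __init__(self, customer):
        super().__init__()
        self.customer = customer

class InventoryRecord(ProductContainer):
    """An inventory record reuses the line handling unchanged (TInventoryRecord)."""
    def __init__(self, warehouse):
        super().__init__()
        self.warehouse = warehouse

order = Order("ACME")
order.add_line("widget", 3)
stock = InventoryRecord("main")
stock.add_line("widget", 10)
print(order.total_quantity(), stock.total_quantity())   # prints 3 10
```

The shared behavior lives in exactly one place, so the next feature that both kinds of record need goes into the base class instead of being copied around.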

But this comes at a cost: it requires time. It takes quite a few steps:

  1. Think of a useful abstraction (just naming it is not easy – my TProductContainer above is stupid).
  2. Create the interface.
  3. Implement the new subclasses.
  4. Change the application where appropriate (and even if it’s just changing declarations, it still sucks, as it’s time-consuming).
  5. Test the whole thing.

Now try to convince the project manager or even your customer that implementing the required feature can be done in x days, but that you’d like to do it in x*2 days because that would be cleaner. The answer will be another question: “If you do it in x days, will it work?” You’ll have to answer “yes” in the end. So you’ll be asked: “If you do it in x*2 days, will it work better than in x days?” and you’ll have to answer “no”, as the whole point of cleaning up messy code is to keep it running just the same.

So, in the end, those things accumulate until the problem cannot be put off any longer and the refactoring has to be done no matter what – just because implementing the feature now takes x days plus y days for understanding the mess you have created over time, y being 2x or so.

The mean thing is: the longer you wait to do the inevitable, the longer the fix will take. So in the end it should always be the x*2 way – if only those uneducated people would understand.

PHP scales well

I think PHP scales well because Apache scales well because the Web scales well. PHP doesn’t try to reinvent the wheel; it simply tries to fit into the existing paradigm, and this is the beauty of it.

Read on shiflett.org after a small pointer from Slashdot in the right direction. This guy really knows what he is writing about – or at least so it seems to me, as I think exactly the same way he does (which is a somewhat arrogant way of putting it, I suppose :-)).

Read by the PostgreSQL team

Seeing this in my referrer log, and seeing that Robert, who comments here, is on the PostgreSQL team too, I come to the conclusion that someone on the Postgres team – with obviously enough influence to propose links for the weekly newsletter – seems to be reading my humble blog.

Thank you for mentioning my posting in your weekly news. That was very kind.

WinInet, Proxies and NTLM

For quite some time now, customers have been telling me that PopScan seems to have problems with proxy servers that use NTLM authentication. I knew that, and I told everyone that this was not supported.

But I could not understand it: why did it not work? I mean, I had moved from my own HTTP routines to WinInet precisely to be able to use the system-wide proxy server settings and connections.

When using WinInet and INTERNET_OPEN_TYPE_PRECONFIG with InternetOpen, the whole thing is supposed to just work – as long as IE itself works. But in my application this wasn’t the case, and I had no idea why. As soon as NTLM was enabled on the proxy, I just got a 407 HTTP_PROXY_AUTHENTICATION_REQUIRED status back, despite the correct password being used.

MSDN was of help (taken from the documentation of HttpOpenRequest):

If authentication is required, the INTERNET_FLAG_KEEP_CONNECTION flag should be used in the call to HttpOpenRequest. The INTERNET_FLAG_KEEP_CONNECTION flag is required for NTLM and other types of authentication in order to maintain the connection while completing the authentication process

I added this flag (and some more, now that I was at it anyway), recompiled, tested and – yes – it finally does what it should: it works right out of the box. No more 407, no more entering passwords for the users. One more thing that switched its state from “not supported” to “supported and working splendidly”.

This is with an NTLM-enabled Squid proxy, but it should work with Microsoft ISA too.