Windows 2008 / NAT / Direct connections

Yesterday I ran into an interesting problem with Windows 2008’s implementation of NAT (don’t ask – this was the best solution – I certainly don’t recommend using Windows for this purpose).

Whenever I enabled the NAT service, I was unable to reliably connect to the machine via remote desktop or any other service the machine was offering. Packets sent to the machine were dropped as if a firewall were in between, but there wasn’t one, and the Windows firewall was configured to allow remote desktop connections.

Strangely, sometimes and from some hosts I was able to make a connection, but not consistently.

After some digging, this turned out to be a problem with the interface metrics: the server tried to respond over the interface with the private address, which wasn’t routed.

So if you are in the same boat, configure the interface metrics of both interfaces manually. Set the metric of the private interface to a high value and the metric of the public (routed) one to a low value.

At least for me, this instantly fixed the problem.
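For reference, this is roughly what that looks like from an elevated command prompt on Server 2008. The interface names here (“Private LAN”, “Internet”) are placeholders – list your actual interfaces first and substitute the names you see:

```
:: List interfaces with their current metrics
netsh interface ipv4 show interfaces

:: Private, non-routed interface: high metric so it loses route selection
netsh interface ipv4 set interface "Private LAN" metric=50

:: Public, routed interface: low metric so replies leave through it
netsh interface ipv4 set interface "Internet" metric=5
```

The same settings are also reachable via the GUI (adapter properties → TCP/IPv4 → Advanced → untick “Automatic metric”).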

VMware Fusion Speed

This may be totally placebo, but I noticed that using Vista inside a VMware Fusion VM went from nearly unbearably slow to actually quite fast after updating from 2.0 Beta 2 to 2.0 Final.

It may very well be that the beta versions contained additional logging and/or debug code which was keeping the VM from reaching its fullest potential.

So if you have been too lazy to upgrade and are still running one of the beta versions, you should consider updating. For me at least, it really brought a nice speed-up.

This tram runs Microsoft® Windows® XP™

Image of the new station information system in some of Zurich's tramways with a Windows GPF on top of the display

The trams here in Zürich were recently upgraded with a really useful system providing an overview of the next couple of stations and the times at which they will be reached.

Today, I managed to grab this picture, which once again shows clearly why Windows maybe isn’t the right platform for something like this. Also, have a look at the number of applications in the taskbar (I know, the picture is bad, but that’s all I can get out of my mobile phone)…

If I were tasked with implementing something like this, I’d probably use Linux in the backend and a web browser as the frontend. That way it’s easier to debug, more robust and less embarrassing if it blows up.

Windows Installer – Worked around

I’ve talked about Windows Installer (the tool that parses those .MSI files) before, and I’ve never really been convinced that this technology does its job. Just have a look at these previous articles: Why o why is my hard-drive so small?, A look at Windows Installer and The myth of XCOPY deployment.

Yesterday I had a look at the Delphi 2007 installation process, and it dawned on me that I’m going to have to write yet another blog entry.

It’s my gut feeling that 80% of all bigger software packages on Windows can’t live with MSI’s default feature set and have to work around inherent flaws in the design of that tool. Here’s what I found installers doing (in increasing order of stupidity):

  1. Use a .EXE stub to install the MSI engine. These days this really doesn’t make sense any more, as 99% of all Windows installations already have MSI installed, and the ones that don’t you don’t want to support anyway (Windows Update requires MSI).
  2. Use a .EXE stub that checks for prerequisites and installs the missing ones – sometimes even other MSI packages. This isn’t because MSI files are unable to detect the presence of prerequisites – it’s because MSI files are unable to install other MSI files, and the workaround (using merge modules) doesn’t work because most of the third-party libraries to install don’t come as merge modules.
  3. Create an MSI file which contains a traditional .EXE setup, unpack that to a temporary location and run it. This is what I call the “I want a Windows logo, but have no clue how to author MSI files” type of installation (and I completely understand the motivation behind it), which just defeats every purpose MSI files ever had. Still: due to inherent limitations in the MSI engine, this is often the only way to go.
  4. Create MSI files that extract a vendor-specific DLL, a setup script and all files to deploy (or even just an archive), and then use that vendor-specific DLL to run the install script. This is what InstallShield does at least some of the time. It’s another version of the “I have no clue how to author an MSI file” installation, with the additional “benefit” of being totally vendor-locked.
  5. Create a custom installer that installs all files and registry keys and then launches Windows Installer with a temporary .MSI file to register the installation work with the MSI engine. This is what Delphi 2007 does. I feel this is another workaround for Microsoft’s policy that only MSI-driven software can get a Windows logo, but this time it’s vendor-locked and totally unnecessary, and I’m not even sure such behavior is consistent with any kind of specification.

Only a small minority of installations really use pure MSI, and those are usually installations of small software packages – and as my previous articles show, the technology is far from fool-proof. While I see that Windows should provide a generalized means of driving software installations, MSI can’t be the solution, as evidenced by the majority of packages using workarounds to get past the inherent flaws of the technology.


PT-AE1000 HDMI woes

Today was the day when I got the crown jewel of my home entertainment system: a Panasonic PT-AE1000.

The device has a native resolution of 1920×1080, which means it’s capable of showing 1080p content (at 50, 60 and even 24 Hertz). It’s the thing that was needed to complete my home entertainment setup.

The projector is quite large but not that heavy. I also like the motorized lens controls for zoom and focus, and I love the incredible lens shift range: you can move the picture by basically its whole size in any direction. This allowed me to avoid tilting the device even though it’s mounted quite high up on the ceiling. No tilt means no keystone distortion.

All projectors provide you with some means of correcting the keystone effect, but you’ll automatically lose picture quality and content when using it, so it’s best to leave it off.

Unfortunately, the device has one flaw: it reports totally wrong screen resolutions via DDC when you connect it via DVI (or HDMI, but that’s the same thing).

It tells Windows (strangely enough, it works on Mac OS X) that it supports a resolution of 1920×540 at a strange refresh rate of around 54 Hz.

The Intel chipset of my Mac Mini can’t output this resolution, so it falls back to 480p, and there’s no possibility of changing this.

With the help of PowerStrip (which you won’t even need when you are reading this), I created a corrected monitor .INF file that contains the correct resolution and acceptable refresh rates (taken from the projector’s manual).

Once you tell Windows to update the driver of your monitor and point it to this file specifically, it will allow you to set the correct resolution.
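To give you an idea of what such a file looks like, here is a heavily abbreviated and hypothetical sketch of a monitor INF – the provider and model strings, the hardware ID and the frequency ranges are all placeholders; the real horizontal/vertical limits must come from the projector’s manual:

```
; Hypothetical, abbreviated monitor INF - all names, the hardware ID and the
; frequency ranges below are illustrative placeholders.
[Version]
Signature = "$WINDOWS NT$"
Class     = Monitor
ClassGUID = {4d36e96e-e325-11ce-bfc1-08002be10318}
Provider  = %Provider%

[Manufacturer]
%Provider% = Models

[Models]
; Hardware ID as reported by the display's EDID (placeholder here)
%Model% = Model.Install, Monitor\PAN0000

[Model.Install]
AddReg = Model.AddReg

[Model.AddReg]
; Supported mode: horizontal range (kHz), vertical range (Hz), sync polarities
HKR,"MODES\1920,1080",Mode1,,"15.0-75.0,24.0-61.0,+,+"

[Strings]
Provider = "Homebrew"
Model    = "Panasonic PT-AE1000 (corrected)"
```

The interesting part is the MODES entry: Windows uses those frequency ranges instead of the bogus values the projector reports over DDC.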

*phew* – problem solved.

Aside from this glitch, so far I love the projector. Very quiet, very nice picture quality, perfect colors, and it even looks quite acceptable with its black casing. This is the projector I’m going to keep for many years, as there’s no increase in resolution in sight for a very long time.

Strange ideas gone wrong

Screenshot of three buttons: OK - Cancel - Apply

The Apply button Microsoft brought to us with its Windows 95 release is a strange beast.

Nearly all people I know (myself included) misuse the button.

Ask yourself: When you see the three buttons as shown on the screenshot and you want the changes you made in the dialog to take effect, what button(s) do you hit?

Chances are that you press “Apply” and then “OK”.

Which obviously is wrong.

The meaning of the buttons is as follows: “Apply” applies the changes you made, but leaves the dialog open. “Cancel” throws the changes away and closes the dialog. “OK” applies the changes and closes the dialog.

So in a situation like the one above, hitting OK would suffice.

I see no real reason why the Apply button is there, and personally I don’t understand why people insist on hitting it. Mind you, this also affects “educated” people: I know perfectly well how the buttons work, and I’m still pressing Apply when it’s not needed.

Actually, Apply is a dangerous option set out to defeat the purpose of the Cancel button: many times I catch myself hitting “Apply” after every modification I make in the dialog, rendering the Cancel button useless – because I’m constantly applying the changes, Cancel usually does nothing.

Why is the Apply button there then?

It’s there to provide the user with feedback on her changes without forcing her to reopen the dialog.

Say you want to reconfigure the look of your desktop. At first you change the font. Then you hit Apply and watch whether you like the change. If yes, you can now change the background and hit Apply again. If not, you can manually change the font back.

The problem is that nobody uses the buttons that way, and I personally have no idea why. Is it an emotional thing? Do you feel that you have to hit Apply and OK to really make it stick? I have no idea.

Personally, I prefer the Mac way of doing things: changes you make are applied immediately, but there’s (often) a way to reset everything to the state it was in when you opened the dialog. This combines immediate response with a clean, safe way to go back to square one.

My question to you is: do you, too, catch yourself doing that pointless Apply-OK sequence? Or is it just me, many people in screencasts, my parents and many customers doing it wrong?