Newfound respect for JavaScript

Around the year 1999 I began writing my own JavaScript code as opposed to copying and pasting it from other sources and only marginally modifying it.

In 2004 I discovered AJAX (XMLHttpRequest in particular) for myself just before the hype started, and I have been doing more and more JavaScript since then.

I always regarded JavaScript as something you have to do but dislike. My code was dirty, mainly because I was under the wrong impression that JavaScript was a procedural language with just one namespace (the global one). Also, I wasn’t using JavaScript for a lot of the functionality of my sites, partly because of old browsers and partly because I had not yet seen what was possible in that language.

But for the last year or so, I have been writing very large quantities of JS for very AJAXy applications, which made me really angry about the limited means available to structure the code.

And then I found a link on reddit to a lecture by a Yahoo employee, Douglas Crockford, which really managed to open my eyes.

JavaScript isn’t a procedural language with some object-oriented stuff bolted on. JavaScript is a functional language with object-oriented and procedural concepts integrated where it makes sense, allowing us developers to both write code quickly and understand existing code even with only very little knowledge of how functional languages work.

The immensely powerful concepts of having functions as first-class objects, of allowing closures and of allowing object prototypes to be modified at will turn JS into a really interesting language which can be used to write “real” programs with a clean structure.
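
To make this a bit more concrete, here is a tiny sketch of my own (a toy example, not code from the lecture) showing the two features that impressed me most: a closure giving a function private state without touching the global namespace, and the prototype of a built-in object being extended at runtime:

// toy example, not from the lecture
function makeCounter() {
    var count = 0;                // private state, invisible from the outside
    return function () {          // functions are first-class values
        count += 1;
        return count;
    };
}

var next = makeCounter();
next(); // 1
next(); // 2 (the closure keeps its own copy of count)

// prototypes of built-in objects can be extended at will
String.prototype.shout = function () {
    return this.toUpperCase() + '!';
};

'hello'.shout(); // "HELLO!"

Nothing spectacular, but the counter keeps its state alive through the closure alone, without a single global variable.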

The day I saw those videos, I understood that I had completely wrong ideas about JavaScript, mainly because of my crappy learning experience so far, which initially consisted of copying and pasting crappy code from the web and later of reading library references, while always ignoring real introductions to the language («because I know that already»).

If you are interested in learning about a completely new, powerful side of JavaScript, I highly recommend you watch these videos.

A followup to MSI

My last post about MSI generated some nice responses, among them a lengthy blog post on Legalize Adulthood.

Judging from the two trackbacks on the MSI posting and especially after reading the linked post above, I have come to the conclusion that my posting was very easy to misunderstand.

I agree that the workarounds I listed are problems with the authoring. I DO think, however, that all these workarounds were put in place because the platform provided by Microsoft is lacking in some way.

My rant was not about the side effects of these workarounds. It was about their very existence. Why are some of us forced to apply workarounds to an existing platform to achieve our goals? Why doesn’t the platform itself provide the essential features that would make the workarounds unnecessary?

For my *real* problems with MSI from an end user’s perspective, feel free to read this rant or this one (but bear in mind that both are a bit old by now).

Let’s go once again through my points and try to understand what each workaround tries to accomplish:

  1. EXE-Stub to install MSI: MSI, despite being the platform of choice, still isn’t as widely deployed as installer authors want it to be. If Microsoft wants us to use MSI, it’s IMHO their responsibility to ensure that the platform is actually available.

    I do agree though that Microsoft is working on this, for example by requiring MSI 3.1 (the first release with acceptable patching functionality) for Windows Update. This is what will make the stubs unnecessary over time.

    And personally, I think a machine that isn’t using Windows Update and thus doesn’t have 3.1 on it isn’t a machine I’d want to deploy my software on, because a machine not running Windows Update is probably badly compromised and in an unsupportable state.

  2. EXE-Stub to check prerequisites: Once more I don’t get why the underlying platform cannot provide functionality that is obviously needed by the community. Prerequisites are a fact of life and MSI does nothing to help with them. MSI packages can’t be used to install other MSI packages, only Merge Modules, but barely any of the libraries required by today’s applications actually come in MSM format (.NET Framework? Anyone?).

    In response to the excellent post on Legalize Adulthood which gives an example about DirectX, I counter with: Why is there a DirectX Setup API? Why are there separate CAB files? Isn’t MSI supposed to handle that? Why do I have to create a setup stub calling a third-party API to get stuff installed that isn’t installed in the default MSI installation?

    A useful packaging solution would provide a way to specify dependencies or at least allow for automated installation of dependencies from the original package.

    It’s ironic that an MSI package can – even though it’s dirty – use a CustomAction to install a traditionally packaged .EXE installer dependency, but can’t install an .MSI-packaged dependency.

    So my problem isn’t with bootstrappers as such, but with the limitations in MSI itself that require us developers to create bootstrappers to do work which IMHO MSI should be able to do.

  3. MSI packages wrapping .EXEs: I wasn’t saying that MSI is to blame for the authors that repacked their .EXEs into .MSI packages. I’m just saying that this is another type of workaround that could have been chosen for the purpose of getting the installation to work despite (maybe only perceived) limitations in MSI. An ideal packaging solution would be as accessible and flexible as your common .EXE installer and thus make such a workaround unnecessary.

  4. Third party scripting: In retrospect I think the motivation for these third party scripting solutions is mainly vendor lock-in. I’m still convinced though that with a more traditional structure and a bit more flexibility for the installer authors, such third party solutions would become less and less necessary until they finally die out.

  5. Extracting, then merging: Also just another workaround that has been chosen because a distinct problem wasn’t solvable using native MSI technology.

    I certainly don’t blame MSI for a developer screwing up. I’m blaming MSI for not providing the tools necessary for the installer community to use native MSI to solve the majority of problems. I ALSO blame MSI for its messiness, for screwing up my system countless times and for screwing up my parents’ system, which is plainly unforgivable.

    Because MSI is a complicated black box, I’m unable to fix problems with constantly appearing installation prompts, with unremovable entries in “Add/Remove programs” and with installations failing with such useful error messages as “Unknown Error 0x[whatever]. Installation terminated”.

    I’m blaming MSI for not stopping the developer community from authoring packages with the above problems. I’m blaming MSI for its inherent complexity causing developers to screw up.

    I’m disappointed with MSI because it works in a way that requires at least a part of the community to create messy workarounds for quite common problems MSI can’t solve.

    What I posted was a list of workarounds of varying stupidity for problems that shouldn’t exist. Authoring errors that shouldn’t need to happen.

    I’m not being picky here: A large majority of the packages I have had to work with do in fact employ one of these workarounds (the unneeded EXE-stub being the most common one), none of which should be necessary.

    And don’t get me started about how other operating systems do their deployment. I think Windows could learn from some of them, but that’s for another day.

Altering the terminal title bar in Mac OS X

After one year of owning a MacBook Pro, I finally got around to fixing my precmd() ZSH hack to really make the current directory and other useful bits appear in the title bar of Terminal.app and iTerm.app.

This is the code to add to your .zshrc:

case $TERM in
    *xterm*|ansi)
        # escape sequence 1 sets the tab/icon name (used by iTerm)
        function settab { print -Pn "\e]1;%n@%m: %~\a" }
        # escape sequence 2 sets the window title (Terminal.app and iTerm)
        function settitle { print -Pn "\e]2;%n@%m: %~\a" }
        # update both whenever the current directory changes
        function chpwd { settab;settitle }
        settab;settitle
        ;;
esac

settab sets the tab contents in iTerm and settitle does the same thing for the title bar both in Terminal.app and iTerm.

The sample also shows the variables ZSH replaces in the strings (the -P parameter to print makes ZSH do prompt expansion; see zshmisc(1) for a list of all variables): %n is the currently logged-in user, %m the hostname up to the first dot and %~ the current directory, or ~ if you are in $HOME. You can certainly add any other variable of your choice if you need more options, but this more or less does it for me.

Usually, the guides on the internet make you use precmd to set the title bar, but somehow Terminal wasn’t pleased with that method and constantly kept overwriting the title with the default string.

And this is how it looks in both iTerm (above) and Terminal (below):

Windows Installer – Worked around

I’ve talked about Windows Installer (the tool that parses these .MSI files) before and I’ve never really been convinced that this technology does its job. Just have a look at these previous articles: Why o why is my hard-drive so small?, A look at Windows Installer and The myth of XCOPY deployment.

Yesterday I had a look at the Delphi 2007 installation process and it dawned on me that I’m going to have to write yet another blog entry.

It’s my gut feeling that 80% of all bigger software packages on Windows can’t live with MSI’s default feature set and have to work around inherent flaws in the design of that tool. Here’s what I found installers doing (in increasing order of stupidity):

  1. Use a .EXE-stub to install the MSI engine. These days this really doesn’t make sense any more as 99% of all Windows installations already have MSI installed and the ones that don’t, you don’t want to support anyway (Windows Update requires MSI).
  2. Use a .EXE-stub that checks for availability and thereafter installs a bunch of prerequisites – sometimes even other MSI packages. This isn’t caused by MSI files being unable to detect the presence of prerequisites – it’s because MSI files are unable to install other MSI files, and the workaround (using merge modules) doesn’t work because most of the third party libraries to install don’t come as merge modules.
  3. Create an MSI file which contains a traditional .EXE setup, unpack that to a temporary location and run it. This is what I call the “I want a Windows logo, but have no clue how to author MSI files” type of installation (and I completely understand the motivation behind it), which just defeats every purpose MSI files ever had. Still: Due to inherent limitations in the MSI engine, this is oftentimes the only way to go.
  4. Create MSI files that extract a vendor-specific DLL, a setup script and all the files to deploy (or even just an archive) and then use that vendor-specific DLL to run the install script. This is what InstallShield does at least some of the time. It is another version of the “I have no clue how to author an MSI file” installation, with the additional “benefit” of being totally vendor-locked.
  5. Create a custom installer that installs all files and registry keys and then launch Windows Installer with a temporary .MSI file to register your installation work with the MSI installer. This is what Delphi 2007 does. I feel this is another workaround for Microsoft’s policy that only MSI-driven software can get a Windows logo, but this time it’s vendor-locked and totally unnecessary, and I’m not even sure whether such behavior is consistent with any kind of specification.

Only a small minority of installations really use pure MSI, and these are usually installations of small software packages. And as my previous articles show: The technology is far from fool-proof. While I see that Windows should provide a generalized means for driving software installations, MSI can’t be the solution, as evidenced by the majority of packages using workarounds to get around the inherent flaws of the technology.

*sigh*

Software patents

Like most programmers, I too hate software patents. But until now, I’ve never had a fine example of how bad they really are (though I’ve written about intellectual property in general before).

But now I just found another granted patent application linked on reddit.

The patent covers… linked lists.

Granted. It’s linked lists with pointers to objects further down the list than the immediate neighbors, but it’s still a linked list.

I first read about linked lists when I was 13, in my first book about C. That was 13 years ago – way before that patent application was originally filed.

So, seeing a technology that has been in use for at least 13 years being patented as a «new invention», I’m asking myself two questions:

  1. How the hell could this patent application even be accepted seeing that it isn’t inventive at all?
  2. Why do companies file trivial patents for which prior art obviously exists and which are thus invalid to begin with?

And based on that I’m asking the world: Why don’t we stop the madness?

But let’s have a look at the above two points. Answering the first one is easy: The people checking these applications have no interest in (and no obligation to) seriously scrutinizing them. In fact, these «experts» may even be paid per passed patent and are thus highly interested in letting as many patents pass as possible. Personally, I also doubt their technical knowledge in the fields they are reviewing patents in.

Even more so: Most of these applications are formulated in legal-speak targeted at lawyers, who usually have no clue about IT, whereas the IT people usually don’t understand the text of the applications.

Patent law (like trademark law) basically allows you to submit anything, and it’s the submitter’s responsibility to make sure that prior art doesn’t exist. The patent offices can’t be held liable for wrongly issued patents.

And this leads us to question 2: Why submit an obviously invalid patent?

For one, patent applications make the scientific achievement of a company measurable for non-tech people.

Analysts compare the «inventiveness» of companies by comparing the sheer number of granted patents. A company with more granted patents has a higher value in the market, and it’s all about market value these days. This is one big motivation for a company to try to have as many patents granted as possible.

The other issue is that once the patent is granted, you can use that (invalid) patent to sue as many competitors as possible. As you have the legally granted patent on your side, the sued party must prove that the patent is invalid. This means a long and very expensive trial with an uncertain outcome – you can never know if the jury/judge in question knows enough about technology to identify the patent as false or if they will just value the legally issued document higher than the possible doubts raised by the sued party.

This makes fighting an invalid patent a very risky adventure which many companies don’t want to invest money in.

So in many (if not most) cases, your invalid patent is as valuable as a valid one if you intend to use it to sue competitors to make them pay royalties or to hinder them from ever selling a product competing with yours – even though your legal measure is invalid.

One more question to ask: Why does the Free Software community seem so incredibly concerned about software patents while vendors of commercial software usually keep quiet?

It’s all about the provability of infringing upon trivial patents.

Let’s take above linked-list patent: It’s virtually impossible to prove that any piece of compiled software is infringing on this (invalid) patent. In source form though, it’s trivially easy to prove the same thing.

So where this patent serves only one purpose in the closed source world (increased shareholder value due to the higher number of patents granted), it begins to serve the other purpose (a weapon against competitors) in the open source world.

And. Yes. I’m asserting that Free as well as non-Free software infringes upon countless patents, either willingly or unwillingly (I guess the former is limited to the non-free community). Just look at the sheer number of software patents granted! I’m asserting that it’s plain impossible to write software today that doesn’t infringe upon any patent.

Please, stop that software patent nonsense. The current system criminalizes developers and serves no purpose that trademark and intellectual property laws couldn’t solve.

Wii in a home cinema

The day before yesterday I was lucky enough to get myself a Wii.

It was and basically still is impossible to get one here in Switzerland since the launch on December 8th. So I was very happy that I got the last device of a delivery of like 15 units to a game shop near where I work.

Unfortunately, my out-of-the-box experience with the Wii was quite poor, which is why I didn’t write the review yesterday – I wanted to spend a bit more time with the console before writing something bad about it.

Here’s my story:

I’m using a projector, a receiver and a big screen – a real home cinema.

This means that the Wii is usually placed quite far away from either the screen or the receiver (and especially from the projector – about 25 meters in my case). It also means that I run into big issues with the relatively short cable with which you are supposed to connect the sensor bar to the Wii.

And the short A/V-cable didn’t help either, so I also couldn’t just place the Wii near the screen because then I wouldn’t be able to connect it to the receiver.

I ended up placing the Wii more or less in the middle of the room and while I like the looks of the console, it still doesn’t fit the clean look of the rest of my home cinema.

It gets worse though: I placed the sensor bar on top of my center speaker, right below the screen. It turned out that this placement was too far below my usual line of sight, so the Wiimote wasn’t able to pick up the signal.

So currently, I have placed the sensor bar on top of an awful-looking brown box right in the middle of my table – a setup I have to rebuild whenever I want to play and put away when I’m not playing.

I SO want that wireless sensor bar so I could place it on top of my screen.

But the not-quite-working goes on: At first I wasn’t able to connect to my WLAN. The Wii just didn’t find the network. Flashing the ZyXEL AP with newer firmware helped there and the Wii recognized the network, but it was unable to get an IP address.

Due to the awkward placement it was unable to get a strong signal.

I moved the device more to the middle of the room (making it even more visible to the casual eye) and it was finally able to connect.

My first visit to the shopping channel ended up with the whole console crashing hard. Not even the power button worked – I had to unplug and replug it, at which point I had had enough and just played Zelda (a review of that jewel will probably follow).

Yesterday I was luckier with the shopping channel (I didn’t buy anything though) and, as I had my terrible “sensor bar on a box” configuration already up and running, I got a glimpse of what the Wii out-of-the-box experience could be: Smoothly working, good-looking and with a very nice user interface – using the Wiimote to point at the screen feels so … natural.

In my opinion, Nintendo made an awful mistake by forcing that cable on the sensor bar. As we know by now, the bar contains nothing more than two IR LEDs. The cable is only there to power them. Imagine the sensor bar being another BT device – maybe mains-powered or otherwise battery-powered (though these IR LEDs suck power like mad). Imagine the console being able to turn it on and off wirelessly.

The whole thing would not have been that much more expensive (alternatively, they could sell it as an add-on), but it would allow the same awesome out-of-the-box experience for all users – even the ones with a real home entertainment system.

If it weren’t Nintendo (I admit that I am a «fanboi» in matters of Nintendo – the conditioning I got with the NES in my childhood still hasn’t worn off), I would have been so incredibly pissed off on that first evening that I would have returned the whole console and written one bad review here – even the XBox 360 worked better than the Wii… *sigh*

And all that to save a couple of hours in the engineering department.

External blogging tools

Ever since I started blogging, I have been using different tools to help me do my thing.

At first, I was using the browser to directly write the articles in the MT interface, but after losing a significant amount of text that way, I quickly migrated to writing my entries in a text editor (jEdit back then) and pasting them into the MT interface.

Then I learned about the XML-RPC interface to MT and began using w.bloggar to do my writing, but stagnation and little quirks made me go back to a real text editor, which is what I used for a long time (though I migrated from MT to s9y in between).

Last year, I caught the buzz about Windows Live Writer, which was kind of nice, but generally I need more freedom in writing HTML than what a WYSIWYG editor can provide – especially as I have special CSS rules (for code, for example) that I prefer over just manually setting the font.

So I was back to the text editor (which went from jEdit to TextMate in between).

And then I noticed the blogging bundle.

The blogging bundle for TextMate allows me to keep writing my blog entries in my tool of choice, while being able to post the finished entries directly from within TextMate.

Basically, you configure some basic settings for your blogs and then you write a list of colon-separated header fields at the beginning of the document, which TextMate uses to post your entry.
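
Just to give a rough idea, such a header could look something like this (the exact field names depend on the bundle version, so treat this as an approximation and check the bundle’s help for the authoritative list):

Blog: myblog
Title: External blogging tools
Category: Programming
Keywords: textmate, blogging
Pings: Off
Comments: On

The entry text itself then simply follows after a blank line.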

It can fetch categories, configure pings and comments, set tags – whatever you want. Directly from your editor where you are doing your writing. Of course you can also fetch older postings and edit them.

So this provides me with the best of both worlds: Direct posting to the blog with one key press (Ctrl-Command-P) while writing in the editor of my choice, which is very stable and provides me with maximum flexibility in laying out my articles.

I love it.

PT-AE1000 HDMI woes

Today was the day I got the crown jewel of my home entertainment system: a Panasonic PT-AE1000.

The device is capable of displaying a 1920×1080 resolution, which means that it can show 1080p content (at 50, 60 and even 24 Hertz). It’s the piece that was needed to complete my home entertainment setup.

The projector is quite large but not that heavy. I also like the motorized lens controls for zoom and focus and I love the incredible lens shift range: You can basically move the picture by its whole size in any direction. This allowed me to avoid tilting the device even though it’s mounted quite high up on the ceiling. No tilt means no keystone distortion.

All projectors provide you with some means to correct the keystone effect, but you automatically lose picture quality and content when using it, so it’s best to leave it off.

Unfortunately, the device has one flaw: It reports totally wrong screen resolutions via DDC when you connect the projector via DVI (or HDMI, but that’s the same thing).

It tells Windows (strangely enough, it works on Mac OS X) that it supports a resolution of 1920×540 at some strange refresh rate of around 54 Hz.

The Intel chipset of my Mac Mini can’t output this resolution, so it falls back to 480p and there’s no possibility of changing this.

With the help of PowerStrip (which you won’t even need if you are reading this), I created a corrected monitor .INF file that has the correct resolution and acceptable refresh rates in it (taken from the projector’s manual).

Once you tell Windows to update the driver of your monitor and point it to this file specifically, it will allow you to set the correct resolution.

*phew* – problem solved.

Aside from this glitch, I love the projector so far. Very quiet, very nice picture quality, perfect colors, and it even looks quite acceptable with its black casing. This is the projector I’m going to keep for many years, as there’s no increase in resolution in sight for a very long time.

Vista preloaded

Today I had the dubious “pleasure” of setting up a Lenovo Thinkpad R60 with Vista Business Edition preloaded.

We just needed to have a clean Vista machine to test components of our PopScan solution on and I just didn’t have the disk space needed for yet another virtual machine.

I must say that I didn’t look forward to the process, mainly because I hated the OEM installation process under XP. Basically, you got an installation cluttered with “free” “feature enhancements” which usually were really bad-looking if provided by the hardware manufacturer, or nagged the hell out of you if they were trial releases of some antivirus program or something else.

Ever since I started setting up Windows machines for personal use, my policy has been to wipe them clean and install a fresh Windows copy on them.

With this background and the knowledge that just for testing purposes the out-of-the-box installation would do the trick, I turned on that R60 machine.

The whole initial setup process was very pleasant: It was just the usual Windows Setup minus the whole file-copying part – the installation started by asking me what language and regional settings to use, and it actually guessed the keyboard settings right after I set the location (a first! Not even Apple can do that *sigh*).

Then came the performance testing process as we know it from non-OEM-preinstalled installations.

Then it asked me for a username and provided a selection of background images.

I really, really liked that because usually the vendor provided images are just crap.

The selection list even contained some Vista-native images and some Lenovo images – clearly separated.

The last question was a small list of “additional value-add products” with “No thank you” preselected.

You can’t imagine how pleased I was.

Up until what came after, that is.

The system rebooted and presented me with a login screen to which I gave the credentials I provided during the setup process.

Then the screen turned black and a DOS command prompt opened. And a second one, though minimized.

The first two lines in that DOS prompt were

echo "Please wait"
Please wait

I can understand that Lenovo wanted to get their machines out and that they may be willing to sacrifice a bit of Vista’s shininess. But they obviously lack even the basic batch knowledge of using “@echo off” as the first command in their setup script, thus making the installation experience even more unpleasant.

But wait… it’s getting worse…

The script ran and, due to echo being on, displayed the horrors to me: ZIP file after ZIP file was unpacked into the Application Data folder of the new user. MSI file after MSI file was installed. All without a meaningful progress report (to a non-techie, that is).

Then some Lenovo registration assistant popped up, asking me all kinds of personal questions with no way to skip it. But the worst thing about it was the font it used: MS Sans Serif – without any font smoothing. It looked like Windows 98, removing the last bit of WOW from Vista ( :-) ).

Then it nagged me about buying Norton Internet Security.

And finally it let me through to the desktop.

And… oh the horror:

  • My earlier choice of background image was ignored. I was seeing a Lenovo logo all over the place.
  • On the screen was a built-in Vista assistant telling me to update the Windows Defender signatures. It looked awful. Jagginess all over the place: ClearType was clearly off and the default font of Windows looks awful without ClearType.
  • It’s impossible for a non-techie to fix that ClearType thing as it’s buried deep in the Control Panel – it’s supposed to be on and never to be touched by normal users.
  • In the notification area were three icons telling me about WLAN connectivity: Windows’ own, the ThinkPad driver’s and the one from the ThinkVantage Network Access tool (the last one has a bug, by the way: it constantly keeps popping up a balloon telling me that it’s connected. If I close it, it reopens 30 seconds later).

I didn’t do anything to fix this, but quickly joined the machine to the domain in the hope that logging in to that would give me the Vista default profile.

But no: Another MSI installer, and still no ClearType.

It’s a shame to see how the OEMs completely destroy everything Microsoft puts into making their OSes look and feel “polished”. Whatever they do, the OEMs succeed at screwing up the installations.

This is precisely where Apple outshines Windows by far. If you buy a computer from Apple, you will have software on it that was put there by Apple, made by Apple, running on an OS made by Apple. Everything is shiny and works out of the box.

Microsoft will never be able to provide that experience to their users as long as OEMs think they can just throw in some crappily made installation tools that destroy all the good experience a new user could have with the system. From scary DOS prompts to crappy (and no longer needed) third-party applications to completely crappy preconfiguration (I could *maybe* let that ClearType thingie pass IF they’d chosen a system font that was actually readable with ClearType off – this looked worse than a Linux distribution with an unpatched FreeType).

PC OEMs put no love at all into their products.

Just sticking a Windows Vista sticker on it isn’t bringing that “WOW” to the customers at all.

Microsoft should go after the OEMs and force them to provide clean installations with only a minimal amount of customization done.

HD-DVD unlocked

Earlier, it was possible to work around the AACS copy protection scheme in use for HD-DVD and Blu-ray on a disc-by-disc basis.

Now it’s possible to work around it for every disc.

So once more we are in a situation where the illegal media pirate gets a superior user experience compared to the legal user: The “pirate” can download the movie to watch on demand. He can store it on any storage medium he pleases (like home servers, NASes or optical discs). He can reformat the content to whatever format a particular output medium requires (like an iPod) without having to buy another copy. And finally, he is able to watch the stolen media on whatever platform he chooses.

The original media, in contrast, is very much limited:

The source of the content is always the disc the user bought. It’s not possible to store legally acquired HD content on a different medium than the source disc. It’s not possible to watch it on any personal computer but the ones running operating systems from Microsoft. The disc may even force the legal user to watch advertisements or trailers ahead of the main content. There is no guarantee that a purchased disc will work with any player – despite player and disc both bearing the same compatibility label (the HD-DVD or Blu-ray logos). It’s not possible to legally acquire the content on demand and it’s impossible to reformat the content for different devices.

Back in the old days, the copy usually was inferior to the original.

In the digital age of DRM and user-money-milking, this has changed. Now the copy clearly provides many advantages the original currently can’t provide or the industry does not want it to provide.

I salute the incredibly smart hackers who worked around yet another “unbreakable” copy protection scheme, allowing me to create a personal backup copy of any medium I buy, so that I can store the content on my NAS and have the assurance that I’m able to play it when I want and where I want.

I assure you: My happiness is not based on the fact that I can now download pirated movies over BitTorrent. It’s based on the fact that I can store legally purchased HD content on the hard drive of my home server and watch it on demand without having to switch media.

Piracy, for me, is a pure usability problem.