Any Eclipse users out there?

Usually I’m not here to ask questions, but today I have two for my readers. Maybe someone can help?

It’s about Eclipse:

  • Is there a way to automatically switch back to the Java perspective after a debugging session has ended?
  • Can the complete JDK documentation somehow be integrated into the help system? While I know Javadoc is available everywhere while writing code, using the full-text search capabilities of the help system would be really nice every now and then…

I’m quite sure both problems can be solved – I’m just not seeing how. And additionally, I’m having quite some trouble devising useful Google keywords to find a solution.

So, I thought: Maybe some of my readers know Eclipse better than I do.

Any help is appreciated.

Found on my iMac

Today, I found a residual registration link lingering around in the home directory of my iMac. Looking at its contents with cat reveals quite an ordinary .plist XML file.

What’s interesting is what the engineers at Apple obviously thought of the newsletters the user is given a chance to subscribe to:


(the emphasis is mine)

Oh… how I agree with them!

Delphi 2005

I got my hands on the demo version of Delphi 2005 (download it here), and I have actually already configured the beast, so I have my usual environment to work on PopScan with it. These are my first impressions. (I won’t talk about this File-Download-Window-Popping-Up problem, as everyone knows it’s a nasty issue with a security patch from Microsoft which will soon be fixed. Read about it here on Steve’s blog.)

  • It takes quite some time to start up. After removing the Delphi.NET and C# personalities (I don’t need them), it starts about as fast as my Delphi 7 did – just a little bit slower.
  • The compiler got faster, if you ask me.
  • Besides the great new features Borland is talking about, there are very nice usability improvements everywhere which make working quite a bit easier.
  • The VCL form designer is extremely slow on my machine. Just displaying the PopScan main form in the designer takes nearly 10 seconds. Delphi 7 does that instantly.
  • The debugger is slower too, which certainly has to do with the many great feature additions. I can live with that.
  • It’s extremely compatible with Delphi 7: I could install every single third-party component without any problems. This is quite impressive considering Delphi 2005 is quite a rewrite.
  • While I like the new docked form designer, there’s one usability problem with it: when you have components that use their own property editors (like Toolbar 2000), those editors are opened in their own window (understandable). Now, if you select a button in the component editor and then click into the Object Inspector to change a property, the Delphi main window will cover the property editor, rendering it invisible. An easy fix would be to make the property editor always-on-top; a better fix would be to integrate it somewhere into the IDE.
  • Even JCLDebug could be compiled and installed (even the IDE expert worked, though you have to install it manually).

All in all, this release of Delphi is a great one, providing the user with a ton of new features and fixes to long-standing usability problems (so long-standing that you got used to them and now miss them…). I have not experienced any crashes so far (besides the one where the expat parser of a debugged application ate all the RAM on my system, but I don’t blame Delphi for that), which is very nice.

Now, if only the beast could be made to run a bit faster (which will be done, I’d say). It’s the best Delphi since Delphi 2, which means quite a lot…

Thanks Borland.

PS: I know that it’s currently fashionable to bash Borland and to whine about everything they do. And for the fourth consecutive year now, I read postings about Delphi’s impending doom everywhere on the net. But consider this: Delphi is still the only RAD tool out there producing 100% native Windows executables. And it still has one of the liveliest communities I know of in the Windows world. Even if Borland were to kill off Delphi, I’m quite certain it would not go down that easily. Not with this community.

Oh, and speaking of killing off Delphi: seeing this great release of Delphi 2005, I am quite reassured that Borland will continue supporting us.

So: Quit whining!


Imagine you are working on a webshop.

Imagine further that you have a page displaying the user’s shopping cart. Left of each entry, there’s an <input type="text"> for letting the user change the quantity of the article. So far quite a common scenario, isn’t it?

Now, in the age of DHTML and all that, you write some JavaScript to automatically recalculate the grand total of your shopping cart on the fly as the user changes the quantities. This is very nice, as the user gets immediate feedback on her actions. No page reload is involved.
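Such an on-the-fly recalculation can be sketched like this (the function name and the data structure are made up for illustration – the real page would read the quantities straight out of the input fields in a key-event handler):

```javascript
// Recalculate the grand total from a list of price/quantity pairs.
// On the real page, an onkeyup handler would collect the current
// values of the quantity <input> fields, call this function and
// write the result back into the page.
function grandTotal(items) {
    var total = 0;
    for (var i = 0; i < items.length; i++) {
        total += items[i].price * items[i].quantity;
    }
    return total;
}
```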

Now imagine further that the user has changed quite a few quantities. The new cart is nothing like the old one. The user is very happy with the total recalculating itself on every key she presses while the focus is in one of those edit fields. Very nice.

Now the user realizes that she needs another product. She clicks on the “Browse” link and …

What happens?

Well… the link certainly works, and she browses around the shop looking for another product to order. But there’s a serious problem lurking: as all the calculations were done on the client when the user changed the quantities, the server knows nothing about the changes. The server still thinks (provided something like HTTP session emulation is at work – but how would you implement a shopping cart without it?) the quantities are unchanged. When the user looks at the cart the next time (even after reloading the cart page), she will see all the old values.

How do we fix this? (Jonas, if you read this entry: this is about the solution to a problem we faced about a year ago while working on PopScan SMB.) Most common today is one of the following:

  • Post the form on every change of the quantity. While this fixes the problem, it’s not very convenient for the user – especially if she is on a slow modem link. And even if the link is fast: reloading the page whenever I tab out of an edit field is very disturbing (though I’ve seen sites where the page even reloads on every key press).
  • Don’t recalculate anything, but provide an “Update values” button. This is what most users are used to, as this is how the web has worked so far: you enter something, you submit it to the server, or you lose it.

Now this is where XMLHTTP comes into play.

While it has XML in its name, it has very little to do with XML. It’s a technology for sending HTTP requests from JavaScript. And not only that: the requests are sent completely transparently to the end user, in the background. She doesn’t notice the slightest thing while the script is posting requests. As the API is asynchronous, there isn’t even any waiting involved – not even over slow lines.

So.. how does it work?

I used this function to post quantity changes from my shopping cart back to the server:

function updateToServer(quant, art){
    var xmlhttp = false;
    /*@cc_on @*/
    /*@if (@_jscript_version >= 5)
    // JScript gives us conditional compilation, so we can cope
    // with old IE versions and security-blocked creation of
    // the objects.
    try {
      xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
    } catch (e) {
      try {
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
      } catch (E) {
        xmlhttp = false;
      }
    }
    @end @*/
    if (!xmlhttp && typeof XMLHttpRequest != 'undefined') {
      xmlhttp = new XMLHttpRequest();
    }
    if (!xmlhttp) { return; } // no transport available"GET", "/index.php/order/qchg?a=" + encodeURI(art) +
                 "&q=" + encodeURI(quant), true); // the "q" parameter name is reconstructed; the original listing was truncated here
    /* not interested in feedback. if it doesn't work, too bad. other
       methods provide fallback */
    xmlhttp.send(null);
}

(disclaimer: much of the code comes from this page. If you know what you are doing, copy & paste really is a timesaver.)

What does it do?

  1. It uses some IE trickery with conditional compilation to instantiate the object.
  2. If the IE code does not get run (i.e. on every standards-compliant browser), it uses the common way to instantiate the thing.
  3. It prepares the request.
  4. It would set up some event handlers – but as I’m not interested in the outcome, I’m not setting up anything.

As you can see, I created a special URL in my shop system just for updating the quantities.

This function is called from the onChange event of the quantity input boxes. Now, whenever the user changes a quantity, /index.php/order/qchg is called, advising the server to update the quantity. (If you find the URL strange – using PATH_INFO and all that: I will post something about a PHP design pattern I’m using that has proven to be the most powerful one in all those years I’ve been working with PHP.)
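For illustration, here is how the URL for that background request can be put together in one place (the article parameter a appears in the snippet above; the quantity parameter name q is an assumption of mine):

```javascript
// Build the quantity-update URL that is requested in the background.
// "a" (article) matches the snippet above; "q" (quantity) is an
// assumed parameter name for this sketch.
function quantityChangeUrl(art, quant) {
    return "/index.php/order/qchg?a=" + encodeURI(art) + "&q=" + encodeURI(quant);
}
```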

Problem solved.

And just 30 minutes after implementing this method, I found out that for the purpose I’m using it for, this whole XMLHTTP thing would not have been necessary:

While some trickery with FRAMEs could do the same thing, the best method – one that even works with Netscape 4.x (even 3.x, if I remember correctly) – would be to conditionally change the URL of a (transparent 1×1 px) image. This always works as long as no feedback from the script needs to be evaluated:


function updateToServer(quant, art){
    // assumes a transparent 1×1 px image named "sync" somewhere on the
    // page; the image name and the "q" parameter are made up here,
    // as the original listing was truncated
    document.images['sync'].src = '/index.php/order/qchg?a=' + encodeURI(art) + '&q=' + encodeURI(quant);
}
A one-liner: no frame trickery (frames are bad – even for such things), no finding out which object to instantiate, no problems with near-browsers… very nice, but nowhere near structural markup, which is why I prefer the less hacky solution.

I hope this was helpful for you. And as I’m progressing with this very interesting project I’m working on, I will certainly have more such things to post.

PostgreSQL rocks!

I’ve said so before, but I have to say it again: PostgreSQL is incredibly cool.

Today I had the job of importing around 11’000’000 records distributed across 15 tables. Ten million of them went into one big table. And after the import, the whole thing should still respond quickly to queries involving JOINs with this large table.

What surprises me: after a bit of tweaking of the settings (one tweak being to move the beast to a partition with enough space to store the indexes ;-) ), the queries I ran on a much smaller amount of data before remain as fast as ever. PostgreSQL really makes great use of its indexes.

Granted: importing all those records was somewhat slow (I could not and cannot use COPY because I’m only receiving differences), but tweaking around with the indexes helped a lot (tip: drop them while inserting and recreate them afterwards).

While the import was running, Postgres remained as responsive as ever when I was working with other parts of the database.

I know that all this is nothing fancy – I mean, I expect nothing less from a good RDBMS – but still… it’s amazing how well and flawlessly this worked and how fast it is.

Maybe I could get faster INSERT/UPDATE performance if I were using MySQL instead, but I absolutely want to use all those features a real database should have and that MySQL lacks: views, referential integrity, subselects (still on 4.0 until Gentoo releases a more current ebuild).

Yes. Postgres is just great.

Just for your interest:

pilif@fangorn /home % sudo du -chs pgdata
1.8G    pgdata
1.8G    total

And it’s still as fast as your average little web-board application. I still cannot quite believe it.

PostgreSQL vs. MySQL – a subjective view

Still quite enthusiastic about my success with PostgreSQL earlier today, and after reading the first comment on that entry, I think it’s time for a little list describing why I prefer PostgreSQL to MySQL, and another one describing what MySQL does better:

The PostgreSQL list


  • psql, the command-line tool for accessing the database, is much better than its MySQL counterpart. What many don’t seem to know is \x (expanded display). Try it and you will ask yourself why mysql can’t do that. Also, I really like that a pager is invoked when dealing with large result sets. MySQL does not do that either.
  • The license. While I certainly prefer any free software license to any proprietary one, I much prefer the more liberal BSD one. But I’d better leave the flam^Wphilosophizing about this to others…
  • All those “professional” database features like VIEWs, stored procedures (which can even be written in Perl or Python), triggers, rules, enforced referential integrity and all that stuff. I could never imagine going back to a database without VIEWs. Those things are incredibly useful, both for providing a much friendlier interface to complex data and for integrating different pieces of software.
  • The community around PostgreSQL is very strong. Reading the “general” and “developers” mailing lists is very interesting and often provides very good insight into database design.

Back in 2002, when I was working on the new site, I used VIEWs to satisfy the needs both PostNuke and phpBB2 had concerning their tables containing the user accounts. With a view and a little bit of customized scripting I was able to integrate both without any patching around in either of them, which makes applying security updates so much easier. This is where I decided that I will never use anything but PostgreSQL for my database needs.

The MySQL list

  • mysqli is an object-oriented interface for PHP scripts, directly built into the language (and thus fast). Too bad it requires MySQL 4.1, for which Gentoo does not have fitting ebuilds yet. And don’t get me wrong: Postgres’ interface is not bad either.
  • It seems easier to handle. Just install and run. ALTER TABLE is much more powerful than in PostgreSQL, so changing the structure after the fact is easy. Hardly anything must be configured to get close to optimum performance.
  • Clustering is built into the core of the database, though it’s still master-slave replication, which provides fail-safety but no (real) load balancing.

ALTER TABLE in PostgreSQL 8 is about as powerful as MySQL’s, but PostgreSQL 8 suffers from the same problem as MySQL 4.1: no Gentoo ebuild. Here on my iMac, I’m already running the latest beta of 8.0.

The decision to go with PostgreSQL is an easy one: none of the advantages of MySQL is big enough to outweigh the missing features. Oh, and if you ask for benchmarks and tell me that PostgreSQL is slower than MySQL, let me tell you this: while I doubt that this statement is still true (MySQL got slower due to transaction support and PostgreSQL got much faster), I can say one thing for certain: PostgreSQL is fast enough for my needs. What is it worth to give up data integrity and write lots of dirty code that should really live directly in the database, just for a percent more performance or so?

Another thing is how those systems perform under high load. While I certainly know that PostgreSQL handles it well and stays fast for many more concurrent connections, I keep hearing about problems from people using MySQL: corrupted tables (sometimes beyond repair), hanging connections… Nothing I want to happen to me, even if avoiding it means living with one or two percent less performance under unrealistic, benchmark-style load.

Oh, and everything I said about performance is quite unscientific. While I did some load tests with Postgres, all my knowledge of MySQL under the same conditions comes from other people. I never tried it myself. Why should I? PostgreSQL is perfect.