Any Eclipse users out there?

Usually I’m not here to ask questions, but today I have two for my readers. Maybe someone can help?

It’s about Eclipse:

  • Is there a way to automatically switch back to the Java perspective after a debugging session has ended?
  • Can the complete JDK documentation somehow be integrated into the help system? I know Javadoc is available everywhere while writing code, but being able to use the full-text search capabilities of the help system would be really nice now and then…

I’m quite sure both problems can be solved, I’m just not seeing how. On top of that, I’m having quite some trouble coming up with useful Google keywords to find a solution.

So, I thought: Maybe some of my readers know Eclipse better than I do.

Any help is appreciated.

Found on my iMac

Today I found a residual registration link lingering around in the home directory of my iMac. Looking at its contents with cat reveals a quite ordinary .plist XML file.

What’s interesting is what the engineers at Apple obviously thought of the newsletters the user is given a chance to subscribe to:

        <key>RegistrationInfo</key>
        <dict>
                <key>AppleSpam</key>
                <string>NO</string>
                <key>Location</key>
                <string>B</string>
                <key>Occupation</key>
                <string>5</string>
                <key>OthersSpam</key>
                <string>NO</string>
        </dict>

(note the AppleSpam and OthersSpam key names)

Oh… how I agree with them!

Delphi 2005

I got my hands on the demo version of Delphi 2005 (download it here), and I have already configured the beast, so I have my usual environment to work on PopScan with it. These are my first impressions (I won’t talk about the File-Download-Window-Popping-Up problem, as everyone knows it’s a nasty issue caused by a security patch from Microsoft and will soon be fixed – read about it here on Steve’s blog):

  • It takes quite some time to start up. After removing the Delphi.NET and C# personalities (I don’t need them), it starts about as fast as my Delphi 7 did – just a little bit slower.
  • The compiler got faster, if you ask me.
  • Besides the great new features Borland is talking about, there are very nice usability tweaks everywhere which make working quite a bit easier.
  • The VCL form designer is extremely slow on my machine. Just displaying the PopScan main form in the designer takes nearly 10 seconds; Delphi 7 does that instantly.
  • The debugger is slower too, which certainly has to do with the many great feature additions. I can live with that.
  • It’s extremely compatible with Delphi 7: I could install every single third-party component without any problems. This is quite impressive considering Delphi 2005 is quite a rewrite.
  • While I like the new docked form designer, there’s one usability problem with it: when you have components that use their own property editors (like Toolbar 2000), those editors open in their own window (understandable). Now, if you select a button in the component editor and then click into the Object Inspector to change a property, the Delphi main window covers the property editor, rendering it invisible. An easy fix would be to make the property editor always-on-top; a better fix would be to integrate it somewhere in the IDE.
  • Even JCLDebug could be compiled and installed (even the IDE expert worked, though you have to install it manually).

All in all, this is a great release of Delphi, providing the user with a ton of new features and fixes to long-standing usability problems (so long-standing that you got used to them and now almost miss them…). I have not experienced any crashes so far (besides the one where the expat parser of a debugged application ate all the RAM on my system, but I don’t blame Delphi for that), which is very nice.

Now, if only the beast could be made to run a bit faster (which will be done, I’d say). It’s the best Delphi since Delphi 2, which means quite a lot…

Thanks Borland.

PS: I know that it’s currently fashionable to bash Borland and to whine about everything they do. And for the fourth consecutive year now, I read postings about Delphi’s impending doom all over the net. But consider this: Delphi is still the only RAD tool out there producing 100% native Windows executables. And it still has one of the liveliest communities I know of in the Windows world. Even if Borland were to kill off Delphi, I’m quite certain it would not go away so easily. Not with this community.

Oh, and speaking of killing off Delphi: seeing this great release of Delphi 2005, I am quite reassured that Borland will continue supporting us.

So: quit whining!

XMLHTTP

Imagine you are working on a webshop.

Imagine further that you have a page displaying the user’s shopping cart. To the left of each entry, there’s an <input type="text"> that lets the user change the quantity of that article. So far, quite a common scenario, isn’t it?

Now, in the age of DHTML and all that, you write some JavaScript to automatically recalculate the grand total of the shopping cart on the fly as the user changes the quantities. This is very nice, as the user gets immediate feedback on her actions – no page reload is involved.
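Such a recalculation does not need much code. A minimal sketch might look like the following – the qty_<article> and price_<article> field names and the total element are made-up conventions for illustration, not the actual PopScan markup:

function recalcTotal(form) {
    // Sum up quantity * price over all quantity fields.
    // Assumes inputs named "qty_<article>", hidden fields "price_<article>"
    // and an element with the id "total" - all hypothetical names.
    var total = 0;
    for (var i = 0; i < form.elements.length; i++) {
        var el = form.elements[i];
        if (el.name && el.name.indexOf('qty_') == 0) {
            var art = el.name.substring(4);
            var price = parseFloat(form.elements['price_' + art].value);
            var qty = parseInt(el.value, 10);
            if (!isNaN(price) && !isNaN(qty)) {
                total += price * qty;
            }
        }
    }
    document.getElementById('total').innerHTML = total.toFixed(2);
}

Hooked to the onkeyup or onchange events of the quantity fields, something like this gives exactly the immediate feedback described above.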

Now imagine further that the user has changed quite a few quantities. The new cart is nothing like the old one. The user is very happy with the total recalculating itself on every key she presses while the focus is in one of those edit fields. Very nice.

Now the user realizes that she needs another product. She clicks the “Browse” link and…

What happens?

Well… the link certainly works, and she browses around the shop looking for another product to order. But there’s a serious problem lurking: as all the calculations were done on the client when the user changed the quantities, the server knows nothing about them. The server still thinks the quantities are unchanged (provided something like HTTP session emulation is at work – but how would you implement a shopping cart without it?). The next time the user looks at the cart (even after reloading the cart page), she will see all the old values.

How do you fix this? (Jonas, if you read this entry: this is the solution to a problem we faced about a year ago while working on PopScan SMB.) The most common approaches today are the following:

  • Post the form on every change of a quantity. While this fixes the problem, it’s not very convenient for the user – especially if she is on a slow modem link. And even if the link is fast: reloading the page whenever I tab out of an edit field is very disturbing (though I’ve seen sites where the page even reloads on every key press).
  • Don’t recalculate anything, but provide an “Update values” button. This is what most users are used to, as it’s how the web has worked so far: you enter something, you submit it to the server, or you lose it.

Now this is where XMLHTTP comes into play.

While it has XML in its name, it has very little to do with XML. It’s a technology for sending HTTP requests from JavaScript. And not only that: the requests are sent in the background, completely transparent to the end user. She doesn’t notice a thing while the script is posting requests. As the API is asynchronous, there isn’t even any waiting involved – not even over slow lines.

So.. how does it work?

I used this function to post quantity changes from my shopping cart back to the server:

function updateToServer(quant, art) {
    var xmlhttp = false;
    /*@cc_on @*/
    /*@if (@_jscript_version >= 5)
    // JScript gives us conditional compilation, so we can cope
    // with old IE versions and security-blocked creation of
    // the objects.
    try {
        xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
    } catch (e) {
        try {
            xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
        } catch (E) {
            xmlhttp = false;
        }
    }
    @end @*/
    if (!xmlhttp && typeof XMLHttpRequest != 'undefined') {
        xmlhttp = new XMLHttpRequest();
    }
    if (!xmlhttp) {
        // no way to create the object (very old browser) - give up silently
        return;
    }

    xmlhttp.open("GET", "/index.php/order/qchg?a=" + encodeURI(art) +
                 "&q=" + encodeURI(quant), true);

    /* not interested in feedback. if it doesn't work, too bad. other
       methods provide fallback
    xmlhttp.onreadystatechange = function() {
        if (xmlhttp.readyState == 4) {
            alert(xmlhttp.responseText);
        }
    };
    */
    xmlhttp.send(null);
}

(Disclaimer: much of the code comes from this page. If you know what you are doing, copy & paste really is a timesaver.)

What does it do?

  1. It uses some IE trickery with conditional compilation to instantiate the object.
  2. If the IE code is not run (i.e. on every standards-compliant browser), it uses the standard way to instantiate the object.
  3. It prepares the request.
  4. It could set up event handlers – but as I’m not interested in the outcome, I’m not setting up any (the handler above is left commented out).
  5. It sends the request with send(null).

As you can see, I created a special URL in my shop system just for updating the quantities.

This function is called from the onChange event of the quantity input boxes, so whenever the user changes a quantity, /index.php/order/qchg is called, telling the server to update the quantity. (If you find the URL strange – using PATH_INFO and all that: I will post something about a PHP design pattern I’m using that has proven to be the most powerful one in all the years I’ve been working with PHP.)
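The wiring itself can be done with an inline onchange attribute on each input, or programmatically. Here’s a sketch of the programmatic variant, again assuming – purely for illustration – that the quantity inputs follow the made-up qty_<article> naming convention from above:

// Hypothetical wiring: attach updateToServer() to every quantity field.
// Assumes the inputs follow the made-up "qty_<article>" naming convention.
window.onload = function() {
    var inputs = document.getElementsByTagName('input');
    for (var i = 0; i < inputs.length; i++) {
        var el = inputs[i];
        if (el.name && el.name.indexOf('qty_') == 0) {
            el.onchange = function() {
                // "this" is the input that just changed; everything after
                // the "qty_" prefix is treated as the article id
                updateToServer(this.value, this.name.substring(4));
            };
        }
    }
};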

Problem solved.

And just 30 minutes after implementing this method, I found out that for the purpose I’m using it for, the whole XMLHTTP thing would not even be necessary:

While some trickery with FRAMEs could do the same thing, the best method – one that even works with Netscape 4.x (even 3.x, if I remember correctly) – is to dynamically change the URL of a (transparent, 1×1 pixel) image. This always works as long as no feedback from the script needs to be evaluated:

Pseudocode:

function updateToServer(quant, art) {
    // assumes the page contains a hidden tracking image, e.g.
    // <img name="qposter" src="blank.gif" width="1" height="1" alt="">
    document.images['qposter'].src = "/index.php/order/qchg?a=" + encodeURI(art) +
                                     "&q=" + encodeURI(quant);
}

A one-liner: no frame trickery (frames are bad – even for things like this), no figuring out which object to instantiate, no problems with almost-standards-compliant browsers… Very nice, but it’s nowhere near structural markup, which is why I prefer the less hacky XMLHTTP solution.

I hope this was helpful for you. And as I progress with this very interesting project I’m working on, I will certainly have more things like this to post.

PostgreSQL rocks!

I’ve said so before, but I have to say it again: PostgreSQL is incredibly cool.

Today I had the job of importing around 11’000’000 records distributed over 15 tables. 10 million of them went into one big table. And after the import, the whole thing should still respond quickly to queries involving JOINs with this large table.

What surprises me: after a bit of tweaking of the settings (one of them being moving the beast to a partition with enough space to store the indexes ;-) ), the queries I had run on a much smaller amount of data before remain as fast as ever. PostgreSQL really makes great use of its indexes.

Granted: importing all those records was somewhat slow (I could not and cannot use COPY because I’m only receiving differences), but tweaking the indexes helped a lot (tip: drop them while inserting and recreate them afterwards).

While the import was running, Postgres was still as responsive as ever when working with other parts of the database.

I know that none of this is anything fancy – I mean, I expect nothing less from a good RDBMS – but still… it’s amazing how well and flawlessly this worked and how fast it is.

Maybe I could get faster INSERT/UPDATE performance if I were using MySQL instead, but I absolutely want all those features a real database should have and that MySQL lacks: views, referential integrity, subselects (I’m still on 4.0 until Gentoo releases a more current ebuild).

Yes. Postgres is just great.

Just for your interest:

pilif@fangorn /home % sudo du -chs pgdata
1.8G    pgdata
1.8G    total

And it’s still as fast as your average little web-forum application. I still cannot quite believe it.

PostgreSQL vs. MySQL – a subjective view

Still quite enthusiastic about my success with PostgreSQL earlier today, and after reading the first comment on that entry, I think it’s time for a little list of the main reasons why I prefer PostgreSQL over MySQL, and another one describing what MySQL does better:

PostgreSQL

  • psql, the command-line tool for accessing the database, is much better than its MySQL counterpart. What many don’t seem to know is \x, which toggles expanded display. Try it and you will ask yourself why mysql can’t do that. Also, I really like that a pager is invoked when dealing with large result sets – MySQL does not do that either.
  • The license. While I certainly prefer any free software license over any proprietary one, I much prefer the more liberal BSD one. But I’d better leave the flam^Wphilosophizing about this to others…
  • All those “professional” database features like VIEWs, stored procedures (which can even be written in Perl or Python), triggers, rules, enforced referential integrity and all that stuff. I could never imagine going back to a database without VIEWs. Those things are so incredibly useful, both for providing a much friendlier interface to complex data and for integrating different pieces of software.
  • The community around PostgreSQL is very strong. Reading the “general” and “developers” mailing lists is very interesting and often provides very good insight into database design.

Back in 2002, when I was working on the new adsl.ch, I used VIEWs to satisfy the needs both PostNuke and phpBB2 had concerning the table containing the user accounts. With a view and a little bit of customized scripting, I was able to integrate the two without having to patch either of them, which makes applying security updates so much easier. This is when I decided that I will never use anything but PostgreSQL for my database needs.

The MySQL list

  • mysqli is an object-oriented interface for PHP scripts, built directly into the language (and thus fast). Too bad it requires MySQL 4.1, for which Gentoo does not have fitting ebuilds yet. And don’t get me wrong: Postgres’ interface is not bad either.
  • It seems easier to handle: just install and run. ALTER TABLE is much more powerful than in PostgreSQL, so changing the table structure after the fact is easy. Practically nothing has to be configured to get close to optimal performance.
  • Clustering is built into the core of the database, though it’s still master-slave replication, which provides fail-safety but no (real) load balancing.

ALTER TABLE in PostgreSQL 8 is about as powerful as MySQL’s, but PostgreSQL 8 suffers from the same problem as MySQL 4.1: no Gentoo ebuild. Here on my iMac, I’m already running the latest beta of 8.0.

The decision to go with PostgreSQL is an easy one: none of the advantages of MySQL are big enough to outweigh the missing features. Oh, and if you point to benchmarks and tell me that PostgreSQL is slower than MySQL, let me tell you this: while I doubt that this is still true (MySQL got slower due to transaction support and PostgreSQL got much faster), I can say one thing for certain: PostgreSQL is fast enough for my needs. What is the point of giving up data integrity and writing lots of dirty code for logic that should really live directly in the database, just for a percent or so more performance?

Another thing is how these systems perform under high load. While I know that PostgreSQL handles it well and stays fast with many more concurrent connections, I keep hearing about problems from people using MySQL: corrupted tables (sometimes beyond repair), hanging connections… Nothing I want to happen to me, even if avoiding it means living with one or two percent less performance under unrealistic, benchmark-style load.

Oh, and everything I said about performance is quite unscientific. While I did some load tests with Postgres, all my experience with MySQL under the same conditions comes from other people – I never tried it myself. Why should I? PostgreSQL is perfect.

AirPort base station and external DHCP server

Recently, I bought an AirPort base station.

I wanted to use it as a NAT router and a wireless access point. DNS and DHCP I wanted to handle via a fully fledged BIND/dhcpd combination running on my iMac.

DNS I need because I’m doing some work for the office from home. As much of it is web-based, I need virtual hosts on my server, and I certainly don’t want to go back to the stone age and push hosts files around. DNS was invented for a reason, so please let me use it.

DHCP I wanted because I sometimes use applications on my notebook that need ports forwarded to them (BitTorrent, for example). Forwarding ports without fixed IP addresses is difficult (especially when changing the forwarding address requires a restart of the router), so I wanted to be able to give the MAC address of my notebook’s NIC a fixed IP address. This is not possible with the AirPort’s built-in DHCP server (and I don’t blame Apple for that – it’s quite a special feature).
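With ISC dhcpd, such a fixed assignment is just a small host declaration in dhcpd.conf. A sketch – the MAC and IP addresses below are placeholders, of course:

# dhcpd.conf (excerpt): give the notebook's NIC a fixed address.
# The MAC and IP addresses are placeholders.
host notebook {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.1.50;
}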

Now, imagine how disappointed I was to see that this is not possible using Apple’s configuration program:

They tie NAT and DHCP together: you can turn off both NAT and DHCP, NAT only, or neither of them – but turning off DHCP alone is not possible.

Looking around on the web, I came across Jon Sevy’s Java-based configurator again.

With this tool, my configuration is indeed possible:

  1. Configure your base station using Apple’s utility. Tell it to enable NAT and to distribute IP addresses.
  2. Update the configuration and quit Apple’s utility.
  3. Run the Java-based configurator.
  4. On the “DHCP Functions” tab, uncheck the checkbox.
  5. On the “Bridging Functions” tab, uncheck “Disable bridging between Ethernet and wireless LAN”.
  6. Save the configuration.

    The bridging step (5) is important if you want the base station to keep working as a usable wireless access point. I forgot it the first time I tried: I did not get an IP address, and after setting one manually I could not connect to the wired LAN either. Logical, but disturbing when you think you have the solution and it still does not work as expected…

AC3-Divx on my PPC

As I’ve written in the review of my hx4700 PDA, the thing really shines when it comes to playing XviD videos.

The single big problem with Betaplayer is that it lacks support for decoding AC3 streams. This is bad, as most of my movies have an AC3 audio track (I’m always looking for optimum quality), so I was on the lookout for a solution.

I quickly found PockedDivxEncode, which comes with nice presets for encoding videos for a Pocket PC.

The problem was that the current version always insists on re-encoding the video stream when converting a file, which reduces the overall quality (compressing twice is no use) and takes a long time (about 50% of realtime or slower – I haven’t tried a full run).

Then, on the download page, I found a not-so-visible link to the current beta test version, which has – under “Advanced Settings” – an option to leave the video stream alone and only work on the audio stream.

With this configuration, re-encoding just the AC3 stream becomes possible. As it leaves the video alone, it’s reasonably fast too – about 4 times realtime on my ThinkPad.

This is a usable solution until Betaplayer gets AC3 support.

Pile of new hardware

Last Wednesday, I finally did what I had been talking about since the very beginning of this blog: I bought myself a Mac. In the end, what led me to the decision was that I wanted my home server back. I had some requirements for the new hardware:

  • It must run quietly. I don’t have enough space for a dedicated server room, so the server – if it’s running constantly – must be quiet. This is where my old solution failed.
  • It must run a UNIX derivative. Much of the work I do requires a UNIX server, and having such a beast at home can save me a trip to the office now and then.

The iMac (17 inch, 1.8 GHz) I finally bought fulfills those two requirements (it’s quite quiet as long as I’m not doing anything computationally intensive – which I’m not, at least not while sleeping) and has the additional benefit of a cool UI frontend.

So in the end, this was a logical decision: had I decided to go with a quiet Linux box, the parts alone would have been more expensive – not to mention the time required to assemble the beast.

Setting up the iMac was easy (as I expected). At first I wanted to go with Gentoo for Mac OS X, but that is still very much under construction, so I went with Fink for my UNIX needs.

Now I’m running DNS, DHCP, PostgreSQL and Apache servers – all I need to do my work.

So. After my UNIXish needs were fulfilled, the Macish ones came next: I wanted to video-iChat with my girlfriend. This proved to be quite a hassle to set up:

We never managed to get a working connection – I got timeouts every time I tried to connect. A bit of debugging on my ZyWall router quickly identified it as the culprit: despite my having configured the iMac as the default NAT server, the device did not forward any UDP packets to that host. No wonder it didn’t work.

So it was time to replace my now nearly four-year-old ZyWall (lately it had begun crashing quite often anyway). As I did not want to take any further risks, I bought an AirPort base station. It works nicely – also with my other gear (Pocket PC and ThinkPad).

Furthermore, it became clear to me that I finally have a continuously running server at home, so I could finally (at least somewhat) justify buying myself a Squeezebox. The device arrived just today, and while I knew how great it is, it surprised me even more once I had a look at it: so many settings to tweak, and such great hardware quality. Very good.

In the end, the last week (ending today) was quite hardware-intensive:

  • The iMac
  • Two iSights (one for me, one for my girlfriend). Speaking of the iSight: I’ve just noticed that the iMac has a magnet in the middle of the screen, to be used with the iSight magnet holder so the camera can be positioned nicely in the center. Very nice.
  • An AirPort Extreme base station. Not that I wanted one, but without it the investment in the iSights would have been in vain, as it’s technically impossible to video-chat through the ZyWall.
  • The Squeezebox
  • All in all quite a lot of junk, but so much fun to play with ;-)