Windows Media Encoder: File not found

Today I came across an installation of Windows Media Encoder that refused to actually encode media. Whenever I started the encoding process, the encoder quit with the error 0x80070002 and gave the very helpful information that “the system cannot find the file specified”.

The problem appeared quite suddenly after working perfectly fine for the last three months. As the system is behind a very air-tight firewall and is the only machine in its network segment (aside from some IP cameras), it hasn’t even been updated via Windows Update. So I have to conclude that the problem appeared out of the blue. One day it worked, the next it stopped working.

I’ve tried everything to fix this (the encoder in question was encoding a live stream for a client of ours): from reinstalling the Axis capture driver to reinstalling Windows Media Encoder – nothing worked; the error message stayed the same.

Even googling proved anything but helpful: there are quite a few pages apparently mirroring one and the same MSDN forum thread on which someone posted the very same problem but never got an answer. How annoying is that? You find ten or more hits, each with your problem right in the title and each on a different page, but in the end it’s all the same posting, mirrored by different sites and plastered with advertisements.

On a hunch, though, I deleted “%LOCALAPPDATA%\Microsoft\Windows Media” and “%LOCALAPPDATA%\Microsoft\Windows Media Player”, seeing that these folders stayed intact after a reinstallation while also being somewhat Windows Media related.

Of course that helped!

So if you ever run into the same problem and Media Encoder suddenly stops encoding, it may be caused by a corrupted cache of sorts. In that case, remove the cache and you’ll be encoding again. Note, though, that if you are on a client machine that holds all your media, removing these folders may be unwise, as they could contain some meta information about your media.

In my case that didn’t matter though.
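
For the record, clearing the two caches boils down to something like this (a rough sketch – %LOCALAPPDATA% is only predefined on Vista and later, so adjust the paths for older Windows versions):

rem remove the Windows Media caches that survive a reinstallation
rmdir /s /q "%LOCALAPPDATA%\Microsoft\Windows Media"
rmdir /s /q "%LOCALAPPDATA%\Microsoft\Windows Media Player"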

Dropbox

Dropbox is cloud storage on the next level: you install their little application – available for Linux, Mac OS X and Windows – which creates a folder that is automatically kept synchronized between all the computers you have installed that little application on.

Because it synchronizes in the background and always keeps the local copy around, the access speed isn’t different from a normal local folder – mainly because it is, after all, a local folder you are accessing. Dropbox is not one of those slow “online hard drives”; it’s more like rsync running in the background (and rsync-like it is – the application is intelligent enough to only transmit deltas, even for binary files).

They do provide a web interface, of course, but the synchronization aspect is the most interesting part.

The synchronized data ends up somewhere in Amazon’s S3 service, which is fine with me.

Unfortunately, while the data is stored in an encrypted fashion on S3, the key is generated by the Dropbox server and thus known to them, which makes Dropbox completely unusable for sensitive unencrypted data. They do state in the FAQ that this may change sometime in the future, but for now it is what it is.

Still, I found some use for Dropbox: ~/Library/Preferences, ~/.zshrc and ~/.ssh are now all stored in ~/Dropbox/System and symlinked back to their original place. This means that a large chunk of my user profile is available on all the computers I’m working on. I would even try the same trick with ~/Library/Application Support, but that seems risky due to the missing encryption and due to the fact that Application Support sometimes contains database files which are guaranteed to get corrupted when moved around while they are open – like the Firefox profile.
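
For illustration, the move-and-symlink dance for a single file looks roughly like this (a sketch – the file name inside ~/Dropbox/System is just my own convention):

# move the real file into Dropbox once, then link it back to where the apps expect it
mkdir -p ~/Dropbox/System
mv ~/.zshrc ~/Dropbox/System/zshrc
ln -s ~/Dropbox/System/zshrc ~/.zshrc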

This naturally even works when the internet connection is down – Dropbox synchronizes changes locally, so when the internet (or Dropbox) is down, I just have the most recent copy from when the service was still working – that’s more than good enough.

Another use that comes to mind for Dropbox storage is game save files or addons you’d want to have access to on every computer you are using – just move your stuff to ~/Dropbox and symlink it back to the original place.

Very convenient.

Now if only they’d provide me with a way to supply my own encryption key. That way I would instantly buy the pro account with 25GB of storage and move lots and lots of data in there.

Dropbox is the answer to the ever-increasing number of computers in my life, because now I don’t care about setting up the same stuff over and over again. It’s just there and ready. Very helpful.

Listen to your home music from the office

My MP3 collection is safely stored on shion, on a Drobo mounted as /nas. Naturally, I want to listen to said music from the office – especially considering my fully routed VPN access between the office and my home infrastructure and an upstream that suffices for at least 10 concurrent 128 kbit/s streams (boy – technology has changed in the last few years – I remember the times when you couldn’t reliably stream at 128 kbit/s, let alone my 160/320 kbit/s MP3s).

I’ve tried many things so far to make this happen:

  • Serve the files with a tool like Jinzora. This works, but I don’t really like Jinzora’s web interface and I was never able to get it to work correctly on my Ubuntu box. I was able to trace the problem down to null bytes read from their tag parser, but the code is very convoluted and practically unreadable without putting quite some effort into it. Considering that I didn’t much like the interface in the first place, I didn’t want to invest that time.
  • Use a SlimServer (now SqueezeCenter) with a Softsqueeze player. I don’t use my Squeezebox (an original model with the original Slim Devices branding, not the newer Logitech one) any more, because the integrated amplifier in the Sonos players works much better for my current setup, but this solution worked quite OK. Still, the audio tends to stutter a bit at the beginning of tracks, indicating some buffering issues.
  • Use iTunes’ integrated library sharing feature. This seemed both undoable and impractical. Impractical because it would force me to keep my main Mac running all the time, and undoable because iTunes sharing can’t cross subnet boundaries. Aside from that, it’s a wonderful solution: audio doesn’t stutter, I already know the interface and access is very quick and convenient.

But then I found out how to make the iTunes approach both very much doable and practical.

The network boundary problem can be solved using Network Beacon, a ZeroConf proxy. Start the application and create a new beacon. Choose any service name, use «_daap._tcp.» as the service type, set the port number to 3689, enable the host proxy, leave the host name empty and enter the IP address of the system running iTunes (or Firefly – see below).

Oh, and the target iTunes refuses to serve data to machines in different subnets, so to be able to directly access a remote iTunes, you’d also have to set up an SSH tunnel.
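
Something along these lines does the trick (a sketch – user and host names are made up; 3689 is the DAAP port):

# forward the local DAAP port through the tunnel to the machine serving the library
ssh -N -L 3689:localhost:3689 me@home.example.com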

Using Network Beacon, ZeroConf quickly begins working across any subnet boundaries.

The next problem was that I was forced to keep my main workstation at home running. I fixed that with Firefly Media Server, for which a pretty recent prebuilt package even exists for Ubuntu (apt-get install mt-daapd).

I’ve installed that, configured iptables to drop packets for port 3689 on the external interface, and configured Firefly to use the music share (which basically is a current backup of the iTunes library of my main workstation – rsync for the win).
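
The firewall part is a one-liner; a sketch assuming eth0 is the external interface:

# drop DAAP connections arriving on the external interface
iptables -A INPUT -i eth0 -p tcp --dport 3689 -j DROP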

Firefly in this case even detects the existing iTunes playlists (as the music share is just a backup copy of my iTunes library – including the iTunes Library.xml). Smart playlists don’t work, but they can easily be recreated in the Firefly web interface.

This means that I can access my complete home mp3 library from the office, stutter free, using an interface I’m well used to, without being forced to keep my main machine running all the time.

And it isn’t even that much of a hack, so it’s easy to rebuild should the need arise.

I’d love to not be forced to do the Network Beacon thing, but avahi doesn’t relay ZeroConf information across VPN interfaces.

My favourite asterisk feature

I’ve just included this in the dialplan context where the calls from and to our internal phones live.

[intercom]
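; answer-after=0 in the Call-Info header asks phones that honour it to pick up
; immediately, turning the call into an announcement over the callee's speaker.
; ${EXTEN:2} strips the leading 55, so dialing 55xy pages extension xy.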
exten => _55[6-8][1-9],1,SIPAddHeader("Call-Info: sip:asterisk;answer-after=0")
exten => _55[6-8][1-9],2,Dial(SIP/${EXTEN:2})

This is as useless as it is fun.

Too bad softphones are getting some real attention in the office lately, as they don’t support the answer-after feature – and even if they did, where is the fun in just making yourself heard on the victim’s headphones, as opposed to doing it directly on their speaker, loud enough to be heard in the whole office?

VoIP is fun, and it’s about time I did more with our Asterisk config than just watch it work and never fail.

Automatic language detection

If you write a website, do not use Geolocation to determine the language to display to your user.

If you write a desktop application, do not use the region setting to determine the language to display to your user.

This is incredibly annoying for some of us – especially for me, which is why I’m ranting here.

The moment Google released their (awful) German translation for their RSS reader, I was served the German version just because I have a Swiss IP address.

Here in Switzerland, we actually speak one of three (or four, depending on whom you ask) languages, so defaulting to German is probably not of much help for the people in the French-speaking part.

Additionally, there are many users fluent in (at least reading) English. We always prefer the original language if at all possible, because translations generally never quite work. Even if you have the best translators at work, translated texts never feel fluid – especially not when you are used to the original version.

So, Google, what were you thinking when you switched me over to the German version of the Reader? I had been using the English version for more than a year, so clearly I understood enough of that language to be able to use it. More than 90% of the RSS feeds I’m subscribed to are, in fact, in English. Can you imagine how pissed I was to see the interface changed?

This is even worse on the iPhone/iPod frontend because, there, you don’t even provide an option to change the language aside from manually hacking the URL.

Or take desktop applications. I live in the German-speaking part of Switzerland. True. So naturally I have set my locale settings to Swiss German. You know: I want the correct number formatting, I want my weeks to start on Mondays, I want the correct currency, and I want the 24-hour clock I’m used to.

Actually, I also want the German week and month names, because I will be using these in most of my letters and documents, which are, in fact, in German too.

But my OS installation is English. I am used to English. I prefer English. Why do so many programs insist on using the locale setting to determine the display language? Do you developers think it’s funny to have a mish-mash of languages on the screen? Don’t you think that my using an English OS version may be an indication that I do not want to read your crappy German translation alongside the English user interface of my OS?

Don’t you think that it feels really stupid to have a button in a German dialog box open another, English, dialog (the first one is from Chrome, the one that opens once you click “Zertifikate verwalten” (Manage certificates) is from Windows itself)?

In Chrome, I can at least fix the language – once I found the knob to turn. At first, it was easier for me to just delete the German localization file from the Chrome installation because, being completely unused to German UIs, I was unable to find the right setting.

This is really annoying and I see this particular problem being neglected on an incredibly large scale. I know that I am a minority, but the problem is so terribly easy to fix:

  • All current browsers send an Accept-Language header. In contrast to earlier times, it is nowadays correctly preset in all the common browsers. Use that. Don’t use my IP address (see the sketch after this list).
  • Instead of reading the locale setting of my OS, ask the OS for its UI language and use that to determine which localization to load (this is actually the recommended way of doing things according to Microsoft’s guidelines, at least since Windows XP, which was released in 2001).
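
To illustrate the first point: picking a supported language from the Accept-Language header takes only a few lines of PHP. This is a rough sketch – the list of supported languages and the English fallback are just examples:

<?php
// hypothetical example: pick the best supported language from Accept-Language
$supported = array('en', 'de', 'fr', 'it');
$header = isset($_SERVER['HTTP_ACCEPT_LANGUAGE']) ? $_SERVER['HTTP_ACCEPT_LANGUAGE'] : '';
$accepted = array();
foreach (explode(',', $header) as $part){
    $pieces = explode(';q=', trim($part));
    $lang = strtolower(substr($pieces[0], 0, 2));
    $q = isset($pieces[1]) ? (float)$pieces[1] : 1.0;
    if ($lang != '')
        $accepted[$lang] = max($q, isset($accepted[$lang]) ? $accepted[$lang] : 0);
}
arsort($accepted); // highest q-value first
$language = 'en';  // fall back to English
foreach (array_keys($accepted) as $lang){
    if (in_array($lang, $supported)){
        $language = $lang;
        break;
    }
}
?>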

Using these two simple tricks, you help a minority without hindering the majority in any way and without additional development overhead!

Actually, you’ll be getting away a lot cheaper than before. GeoIP is expensive if you want it to be accurate (and you do want that, don’t you?), whereas there are ready-to-use libraries to determine the correct language even from the most complex Accept-Language header.

Asking the OS for the UI language isn’t harder than asking it for the locale, so no overhead there either.

Please, developers, please have mercy! Stop the annoyance! Stop it now!

VMWare Fusion Speed

This may be totally placebo, but I noticed that using Vista inside a VMWare Fusion VM has just turned from nearly unbearably slow to actually quite fast by updating from 2.0 Beta 2 to 2.0 Final.

It may very well be that the beta versions contained additional logging and/or debug code which was keeping the VM from reaching its fullest potential.

So if you have been too lazy to upgrade and are still running one of the beta versions, you should consider updating. For me at least, it really brought a nice speed-up.

iTunes 8 visualization

Up until now I have not been a very big fan of iTunes’ visualization engine, probably because I’ve been spoiled with MilkDrop in my Winamp days (which still owns the old iTunes display on so many levels).

But with the release of iTunes 8 and its new visualizer, I have to admit that, when you choose the right music (in this case it’s Liberi Fatali from Final Fantasy 8), you can really get something out of it.

The still picture really doesn’t do it justice, so I have created this video (it may be a bit small, but you’ll see what I’m getting at) to visualize my point. Unfortunately, it gets worse and worse near the end, but the beginning is one of the more impressive shows I have ever seen generated out of this particular piece of music.

This may even beat MilkDrop, and I could actually see myself assembling a playlist of some sort and putting this thing on full screen.

Nice eyecandy!

… and back to Thunderbird

It has been a while since I’ve last posted about email – still a topic very close to my heart, be it on the server side or on the client side (though the server side generally works very well here, which is why I don’t post about it).

Way back when, I wrote about Becky!, which is also where I declared the points I deemed important in a mail client. A bit later, I talked about The Bat!, but in the end I settled on Thunderbird, only to switch to Mac Mail when I switched to the Mac.

After that came my excursion to Gmail, but now I’m back to Thunderbird again.

Why? After all, my Gmail review sounded very nice, didn’t it?

Well…

  • Gmail is blazingly fast once it’s loaded, but starting the browser and then Gmail (it loads so slowly that “starting (the) Gmail (application)” is a valid term to use) is always slower than just keeping a mail client open on the desktop.
  • Google Calendar Sync sucks, and we’re using Exchange/Outlook here (and are actually quite happy with it for calendaring and address books – it sucks for mail, but it provides decent IMAP support), so there was no way for the other folks here to have a look at my calendar.
  • Gmail always adds a “Sender:” header when using a custom sender domain, which technically is the right thing to do, but Outlook on the receiving end screws it up by showing the mail as being “From xxx@gmail.com on behalf of xxx@domain.com”, which isn’t really what I want.
  • Google’s contact management offering is subpar compared even to Exchange.
  • The iPhone works better with Exchange than it does with Google (yes, iPhone, but that’s another story).
  • The cool Gmail MIDP client doesn’t work/isn’t needed on the iPhone, but originally was one of the main reasons for me to switch to Gmail.

The one thing I really loved about Gmail, though, was the option of having a clean inbox by providing means for archiving messages with a single keyboard shortcut. Using a desktop mail client without that functionality wouldn’t have been possible for me any more.

This is why I’ve installed Nostalgy, a Thunderbird extension allowing me to assign a “Move to folder xxx” action to a single keystroke (y in my case – just like Gmail).

Choosing Thunderbird over Mac Mail comes down to performance and to Mac Mail’s crazy insistence on always downloading all the messages. Thunderbird is no racehorse, but Mail.app can’t even keep up with a slug.

Lately, more and more interesting posts regarding the development of Thunderbird have appeared on Planet Mozilla, so I’m looking forward to seeing Thunderbird 3 take shape in its revitalized form.

I’m anything but conservative in my choice of applications and gadgets, but mail – also because of its importance to me – must work exactly the way I want it to. None of the solutions out there do that to the full extent, but TB certainly comes closest. Even after years of testing and trying out different solutions, TB is the one that meets most of my requirements without adding new issues.

Gmail is splendid too, but it has some shortcomings that TB doesn’t.

Epic SSL fail

Today when I tried to use the fancy SSL VPN access a customer provided me with, I came across this epic fail:

Of all the things that can be wrong with an SSL certificate, this certificate manages to get them all wrong. The self-signed(1) certificate was issued for the wrong host name(2) and it has expired(3) quite some time ago.

Granted: in this case the issue of trust is more or less constrained to the server knowing who I am (I wasn’t intending to transfer any sensitive data), but still – when you self-sign your certificate, the cost of issuing one for the correct host or issuing one with a very long validity becomes a non-issue.

Anyways – I had a laugh. And now you can have one too.

Simplest possible RPCs in PHP

After spending hours finding out why a particular combination of SoapClient in PHP itself and SOAP::Server from PEAR didn’t consistently work together (sometimes, arrays passed around lost an arbitrary number of elements), I thought about what would be needed to make RPCs work from a PHP client to a PHP server.

I wanted nothing fancy, and I certainly wanted as little overhead as humanly possible.

This is what I came up with for the server:

<?php
header('Content-Type: text/plain');

require_once('a/file/containing/a/class/you/want/to/expose.php');

$method = str_replace('/', '', $_SERVER['PATH_INFO']);

if ($_SERVER['REQUEST_METHOD'] != 'POST'){
   sendResponse(array('state' => 'error', 'cause' => 'unsupported HTTP method'));
}

$s = new MyServerObject();
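// the arguments arrive as a single serialize()d PHP array in the POST body;
// the method name itself was taken from PATH_INFO above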
$params = unserialize(file_get_contents('php://input'));
if ( ($res = call_user_func_array(array($s, $method), $params)) === false)
   sendResponse(array('state' => 'error', 'cause' => 'RPC failed'));
if (is_object($res))
   $res = get_object_vars($res);
sendResponse($res);

function sendResponse($resobj){
    echo serialize($resobj);
    exit;

}

?>

The client shown below is a bit more complex, mainly because it contains some HTTP protocol logic – logic which could probably be reduced to 2-3 lines of code if I used the cURL library, but the client in this case does not have the luxury of having access to such functionality.

Also, I already had the function lying around (/me winks at domi), so that’s what I used (as opposed to file_get_contents with a pre-prepared stream context). This way, we DO have the advantage of learning a bit about how HTTP works and we are totally self-contained.

<?php
class Client{
    function __call($name, $args){
        $req = $this->openHTTPRequest('http://localhost:5436/restapi.php/'.$name, 'POST', array('Content-Type' => 'text/plain'), serialize($args));
        $data = unserialize(stream_get_contents($req['handle']));
        fclose($req['handle']);
        return $data;
    }
    private function openHTTPRequest($url, $method = 'GET', $additional_headers = null, $data = null){
        $parts = parse_url($url);

        $fp = fsockopen($parts['host'], isset($parts['port']) ? $parts['port'] : 80);
        fprintf($fp, "%s %s HTTP/1.1\r\n", $method, $parts['path'].(isset($parts['query']) ? '?'.$parts['query'] : ''));
        fputs($fp, "Host: ".$parts['host']."\r\n");
        if ($data){
            fputs($fp, 'Content-Length: '.strlen($data)."\r\n");
        }
        if (is_array($additional_headers)){
            foreach($additional_headers as $name => $value){
                fprintf($fp, "%s: %s\r\n", $name, $value);
            }
        }
        fputs($fp, "Connection: close\r\n\r\n");
        if ($data)
            fputs($fp, "$data\r\n");

        // read away header
        $header = array();
        $response = "";
        while(!feof($fp)) {
            $line = trim(fgets($fp, 1024));
            if (empty($response)){
                $response = $line;
                continue;
            }
            if (empty($line)){
                break;
            }
            list($name, $value) = explode(':', $line, 2);
            $header[strtolower(trim($name))] = trim($value);
        }
        return array('response' => $response, 'header' => $header, 'handle' => $fp);
   }

}

$client = new Client();
$result = $client->someMethod(array('data' => 'even arrays work'));

?>

What you can’t pass around this way is objects (at least objects which are not of type stdClass), as both client and server would need to have access to the class definition. Also, this seriously lacks error handling. But it generally works much better than what SOAP ever could accomplish.

Naturally, I give up stuff when compared to SOAP or any «real» RPC solution:

  • This one works only with PHP
  • It has limitations on what data structures can be passed around, though that’s alleviated by PHP’s incredibly strong array support.
  • It relies heavily on PHP’s loosely typed nature and thus probably isn’t as robust.

Still, protocols like SOAP (or even any protocol with either «simple» or «lightweight» in its name) tend to be so complicated that it’s incredibly hard, if not impossible, to create different implementations that still work together correctly in all cases.

In my case, where I have to separate two pieces of the same application because of unstable third-party libraries which I would not want to have linked into every PHP instance running on that server, the solution outlined above (plus some error handling code) works better than SOAP on so many levels:

  • it’s easily debuggable – no need for Wireshark or comparable tools
  • client and server are written by me, so they are under my full control
  • it works all the time
  • it relies on as little functionality of PHP as possible, and the functionality it depends on is widely used and tested, so I can assume that it’s reasonably bug-free (aside from my own bugs).
  • it’s a whole lot faster than SOAP, though this does not matter at all in this case.