Protecting Siri

Over the last weekend, 9to5mac.com posted about a hack which shows that it’s possible to run Siri on an iPhone 4 and an iPod touch 4G – and possibly even older devices, considering how much of Siri is running on Apple’s servers.

We’ve always suspected that the decision to restrict Siri to the 4S was basically a marketing decision, but I don’t really care about that either. Nobody is forcing you to use Siri, and thus nobody is forcing you to upgrade to anything.

Siri is Apple’s product and so are the various iPhones. It’s their decision
whom they want to sell what to.

What I find more interesting is that it was even possible to have a hacked Siri on a non-4S phone talk to Apple’s servers. If I were in Apple’s shoes, I would have made that (practically) impossible.

And here’s how:

Putting a device into users’ hands and trusting it is always a very hard, if not impossible, thing to do, as the device can (more or less) easily be tampered with.

So to solve this problem, we need some component that we know reasonably well
to be safe from the user’s tampering and we need to find a way for that
component to prove to the server that indeed the component is available and
healthy.

I would do that using public key crypto and specialized hardware that works like a TPM: a chip that contains a private key embedded in hardware, likely not even updatable. That private key never leaves the chip – there is no API to read it.

The only API the chip provides is either a relatively high-level API to sign
an arbitrary binary blob or, more likely, a lower level one to encrypt some
small input (a SHA1 hash for example) with the private key.

OK. Now we have that device (it’s likely that the iPhone already has something like this for its secure boot process). What’s next?

Next you make sure that the initial handshake with your servers requires that
device. Have the server post a challenge to the phone. Have the phone solve it
and have the response signed by that crypto device.

On your server, you will have the matching public key. If the signature checks
out, you talk to the device. If not, you don’t.
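
In code, the whole exchange boils down to something like this – a minimal sketch using Ed25519 signatures and the Python cryptography library, both of which are my choice for illustration and not anything Apple uses. In the real design, the private key would sit inside the chip and only the signing call would be exposed:

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# stands in for the key that would be burned into the chip at the factory
device_key = Ed25519PrivateKey.generate()
public_key_on_server = device_key.public_key()

# server side: generate a random challenge and send it to the phone
challenge = os.urandom(32)

# device side: the only thing the chip does is sign the blob it is handed
signature = device_key.sign(challenge)

# server side: verify the signature with the matching public key
try:
    public_key_on_server.verify(signature, challenge)
    print("signature checks out – talk to the device")
except InvalidSignature:
    print("signature is bogus – refuse to talk")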

Now, it is possible, using very expensive equipment, to extract that key from the chip (by opening its casing and using a microscope and a lot of skill). If you are really concerned about this, give each device a unique private key. If a key gets compromised, blacklist it.

This greatly complicates the manufacturing process of course, so you might go
ahead with just one private key per hardware type and hope that cracking the
key will take longer than the lifetime of the hardware (which is very likely).

This isn’t at all specific to Siri of course. Whenever you have to trust a device that you put into consumers’ hands, this is the way to go, and I’m sure we’ll be seeing more of this in the future (imagine the uses for copy protection – let’s hope we don’t end up there).

I’m not particularly happy that this is possible, but I’d rather talk about it than hope that it’s never going to happen – it will, and I’ll be pissed.

For now, I’m just wondering why Apple didn’t do this to protect Siri.

OpenStreetMap

The last episode of FLOSS Weekly consisted of an interview with Steve Coast from OpenStreetMap. I knew about the project, but I was under the impression that it was in its infancy, both content-wise and from a technical perspective.

During the interview I learned that it’s surprisingly complete (unless, it seems, you need a map of Canada) and highly advanced from a technical point of view.

But what’s really interesting is how terribly easy it is to contribute. For smaller edits, you just click the edit link and use the Flash editor to paint a road or give it a name. If you need or want to do more, there’s a really easy-to-use Java-based editor:

First you drag a rectangle onto a pre-rendered version of the map, which causes the server to send you the vector data for that area, and then you can edit whatever you want.

If you have them, you can import traces from a GPS logger to help you add roads and paths. When you are finished, you press a button, the changes get uploaded, and they become visible to the public a few minutes later (though one modification I made took about an hour to arrive on the web).

If the same nodes were updated in the meantime, a really nice conflict-resolution assistant helps you resolve the conflicts.

For me personally, this has the potential to become my new after-work time sink as it combines quite a few passions of mine:

  • The GPS tracking, importing and painting of maps is pure technology fun.
  • Actually being outside to generate the traces is healthy and also a lot of fun.
  • Maps are also a passion of mine. I love looking at maps and comparing them to my mental image of the places they show.

And besides all that, OpenStreetMap is complete enough to be of real use. For biking or hiking, it even trumps Google Maps by a wide margin.

Still, at least near where I live, there are many small issues that can easily be fixed.

As the different editors are really easy to use, fixing these issues is a lot of fun, and I can totally see myself cleaning out all the small mistakes I come across or even adding stuff that’s missing. After all, this also provides me with a very good reason to visit the places where I grew up to complete some parts.

The whole concept of being able to update a map with just a couple of mouse clicks is very compelling too, as it finally gives us the potential to have really accurate maps in a very timely fashion. For example: last October, one of the roads near my house was closed, and just recently the tracks of the Forchbahn were moved a bit.

Just today I added these changes to OpenStreetMap, and now OSM is the only publicly available map that correctly shows the traffic situation. And all that with 15 minutes of easy but interesting work.

For those interested, my OpenStreetMap user profile is, of course, pilif.

SMS is dead

BeejiveIM is the first multiprotocol IM application for the iPhone that supports the new background notification features of firmware 3.0. Yesterday I went ahead and bought that application, curious to see how well it would work.

And just now my phone vibrated and on the display, there was an IM message a coworker sent me via Google Talk. The user experience was exactly the same as it would have been with an SMS – well – nearly the same – the phone made a different sound.

So the dream I had many moons ago (6 years – boy, how time flies) has finally come true, with one difference: whereas back then a megabyte of data cost CHF 7, now it’s practically free, considering that I’m unable to actually use up my traffic quota – and even beyond that, it’s only CHF 0.10 now.

So let’s keep that in mind and also consider that SMS pricing hasn’t changed in the last six years.

So while IM was 52 times cheaper than SMS back then, the price advantage now ranges from roughly 3500 times cheaper to infinitely cheaper (within the quota, the marginal cost of an IM message is zero).
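
For what it’s worth, the roughly-3500 figure is just the old factor scaled by how much cheaper data got – a quick back-of-the-envelope check, using only the numbers from the two posts:

old_chf_per_mb = 7.00   # data price six years ago
new_chf_per_mb = 0.10   # data price today, beyond the quota
old_advantage = 52      # "IM was 52 times cheaper than SMS back then"

# SMS pricing stayed the same, so the advantage scales with the data price
new_advantage = old_advantage * (old_chf_per_mb / new_chf_per_mb)
print(new_advantage)    # 3640.0 – roughly the "3500 times cheaper"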

SMS pricing needs to be looked at. This just cannot be.

PostgreSQL 8.4

Like clockwork, about one year after the release of PostgreSQL 8.3, the team behind the best database in the world did it again and released PostgreSQL 8.4, the latest and greatest in a long series of awesomeness.

Congratulations to everyone involved, and may you have the strength to continue to improve your awesome piece of work.

For me, the highlights of this new release are:

  • Parallel restore: I just tried this out, and restoring a dump that used to take around 40 minutes (as a standard SQL/text dump) now takes 5 minutes (see the sketch after this list).
  • The improvements to psql usability make it even clearer that psql isn’t just a command line database tool, but one of the best interfaces to access the data and administer the server. psql hands-down beats whatever database GUI tool I have seen so far.
  • TRUNCATE … RESTART IDENTITY is very useful during development.
  • No more max_fsm_pages makes maintaining the database even easier and removes one setting to keep track of.
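
A minimal sketch of the parallel restore, driven from Python purely for illustration – the database and file names are made up, and parallel restore needs a custom-format dump, so the dump is taken with -Fc first:

import subprocess

# dump in custom format – plain SQL/text dumps can't be restored in parallel
subprocess.run(["pg_dump", "-Fc", "-f", "mydb.dump", "mydb"], check=True)

# restore with four parallel jobs (--jobs is new in PostgreSQL 8.4);
# the target database has to exist already
subprocess.run(["pg_restore", "--jobs=4", "--dbname=mydb_copy", "mydb.dump"], check=True)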

Thanks again for yet another awesome release.

All-time favourite tools – update

It has been more than four years since I last talked about my all-time favourite tools. I guess it’s time for an update.

Surprisingly, I still stand behind the tools listed there: My love for Exim is still unchanged (it has only grown lately – but that’s for another post). PostgreSQL is cooler than ever and powers PopScan day in, day out without flaws.

Finally, I’m still using InnoSetup for my Windows Setup programs, though that has lost a bit of importance in my daily work as we’re shifting more and more to the web.

Still. There are two more tools I must add to the list:

  • jQuery is a JavaScript helper library that allows you to interact with the DOM of any webpage, hiding away browser incompatibilities. There are a couple of libraries out there which do the same thing, but only jQuery is such a pleasure to work with: It works flawlessly, provides one of the most beautiful APIs I’ve ever seen in any library, and there are tons and tons of self-contained plug-ins out there that help you do whatever you could want to on a web page.
    jQuery is an integral part of making web applications equivalent to their desktop counterparts in matters of user interface fluidity and interactivity.
    All while being such a nice API that I’m actually looking forward to doing the UI work – as opposed to the earlier days, which can most accurately be described as “UI sucks”.
  • git is my version control system of choice. There are many of them out there in the world and I’ve tried the majority of them for one thing or another. But only git combines awesome backwards compatibility with what I’ve used before and what’s still in use by my coworkers (SVN) with the ability to beautify commits, work with feature branches, very high speed of execution and very easy sharing of patches.
    Not a single day passes without me using git and running into a situation where I’m reminded of the incredible beauty that is git.

In four years, I’ve not seen any other tool I’ve used as consistently and with as much joy as git and jQuery, so those two have certainly earned their spot in my heart.

Tunnel munin nodes over HTTP

Last time I talked about Munin, the one system monitoring tool I feel works well enough for me to actually bother with. Harsh words, I know, but the key to every solution is simplicity. And simple Munin is. Simple, but still powerful enough to do everything I would want it to do.

The one problem I had with it is that the querying of remote nodes works over a custom TCP port (4949) which doesn’t work behind firewalls.

There are some SSH tunneling solutions around, but what do you do if even SSH is not an option because the remote access method provided to you relies on some kind of VPN technology or access token?

Even if you could keep a long-running VPN connection, it’s a very performance-intensive solution as it requires resources on the VPN gateway. But this point is moot anyway, because nearly all VPNs terminate long-running connections – and if re-establishing the connection requires physical interaction, then you are basically out of luck.

This is why I have created a neat little solution which tunnels the munin traffic over HTTP. It works with a local proxy server your munin monitoring process connects to and a little CGI script on the remote end.
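
The local-proxy side has roughly this shape – a minimal sketch of the idea, not the actual code from the repository linked below; the endpoint URL is made up, and the real proxy keeps the HTTP connection alive instead of issuing one request per command:

import socketserver
import urllib.request

REMOTE_CGI = "http://example.com/munin-tunnel.php"  # placeholder for the real endpoint

class MuninProxyHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # greet the munin master the way a real node would
        self.wfile.write(b"# munin node at tunnelled-host\n")
        for line in self.rfile:
            if line.strip() == b"quit":
                break
            # relay each node command ("list", "config cpu", "fetch cpu", ...)
            # to the remote CGI script and pass its answer back
            reply = urllib.request.urlopen(REMOTE_CGI, data=line).read()
            self.wfile.write(reply)

if __name__ == "__main__":
    with socketserver.TCPServer(("127.0.0.1", 4949), MuninProxyHandler) as server:
        server.serve_forever()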

This approach causes multiple HTTP requests per query interval (the proxy uses Keep-Alive, though, so it’s not extra TCP connections we are talking about – just hits in the access.log you’ll have to filter out somehow), because it’s impossible for a CGI script to keep the connection open and send data both ways – at least not if your server side runs plain PHP, which is the case in the setup I was designing this for.

Anyway – the solution works flawlessly and helps me to monitor a server behind one hell of a firewall and behind a reverse proxy.

You’ll find the code here (on GitHub as usual) and some explanation on how to use it is here.

Licensed under the MIT license as usual.

Monitoring servers with munin

If you want to monitor runtime parameters of your machines, there are quite a few tools available.

But in the past, I’ve never been quite happy with any of them. Some didn’t work, others didn’t work right and some others worked ok but then stopped working all of a sudden.

All of them were a pain to install and configure.

Then, a few days ago, I found Munin. Munin is optimized for simplicity, which makes it work very, very well. And the reports actually look nice and readable, which is a welcome additional benefit.

(Screenshot: Apache parameters)

Like many other system monitoring solutions, Munin relies on custom plugins to access the various system parameters. Unlike other solutions though, the plugins are very easy to write, understand and debug, which encourages you to write your own.

Adding additional servers to be watched is a matter of configuring the node (as in “apt-get install munin-node”) and adding two lines to your master server configuration file.

Adding another plugin for a new parameter to monitor is a matter of creating one symlink and restarting the node.
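
To give an idea of just how simple a plugin is: it’s any executable that prints its graph configuration when called with “config” and the current values otherwise. A minimal sketch in Python – munin already ships with a load plugin, so this is purely for illustration:

#!/usr/bin/env python
import os
import sys

if len(sys.argv) > 1 and sys.argv[1] == "config":
    # munin-node asks for the graph configuration first
    print("graph_title Load average")
    print("graph_vlabel load")
    print("load.label 1 min load")
else:
    # on a normal run, just print the current value
    print("load.value %.2f" % os.getloadavg()[0])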

(Screenshot: manifestation of a misconfigured cronjob)

On the first day after deployment, the tool already proved useful in finding a misconfigured cronjob on one server, which ran every minute for a whole hour every second hour instead of just once every two hours.

Munin may not have all the features of the full-blown solutions, but it has three real advantages over everything else I’ve seen so far:

  1. It’s very easy to install and configure. What good is an elaborate solution if you can never get it to work correctly?
  2. It looks very nice and clean. If looking at the reports hurts the eyes, you don’t look at them or you don’t understand what they want to tell you.
  3. Because the architecture is so straightforward, you can create customized counters in minutes, which in the end provides you with a much better overview of what is going on.

The one big drawback is that the master data collector needs to access the monitored servers on port 4949 which is not exactly firewall-friendly.

Next time, we’ll learn how to work around that (and I don’t mean the usual ssh tunnel solution).

Dropbox

Dropbox is cloud storage on the next level: You install their little application – available for Linux, Mac OS X and Windows – which creates a folder that is automatically kept synchronized between all the computers on which you have installed the application.

Because it synchronizes in the background and always keeps the local copy around, the access speed isn’t different from a normal local folder – mainly because it is, after all, a local folder you are accessing. Dropbox is not one of these slow “online hard drives”; it’s more like rsync in the background (and rsync it is – the application is intelligent enough to only transmit deltas, even for binary files).

They do provide you with a web interface of course, but the synchronizing aspect is the most interesting.

The synchronized data ends up somewhere in Amazon’s S3 service, which is fine with me.

Unfortunately, while the data is stored in an encrypted fashion on S3, the key is generated by the Dropbox server and thus known to them, which makes Dropbox completely unusable for sensitive unencrypted data. They do state in the FAQ that this may change sometime in the future, but for now it is as it is.

Still, I found some use for Dropbox: ~/Library/Preferences, ~/.zshrc and ~/.ssh are all now stored in ~/Dropbox/System and symlinked back to their original place. This means that a large chunk of my user profile is available on all the computers I’m working on. I would even try the same trick with ~/Library/Application Support, but that seems risky due to the missing encryption and due to the fact that Application Support sometimes contains database files which are sure to get corrupted when moved around while they are open – like the Firefox profile.

This naturally even works when the internet connection is down – Dropbox synchronizes changes locally, so when the internet (or Dropbox) is down, I just have the most recent copy from when the service was still working – that’s more than good enough.

Another use that comes to mind for Dropbox storage is game save files or addons you’d want to have access to on every computer you are using – just move your stuff to ~/Dropbox and symlink it back to the original place.
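
The move-and-symlink step itself is trivial – a rough sketch, with the paths being nothing but examples:

import os
import shutil

home = os.path.expanduser("~")
original = os.path.join(home, "Library/Application Support/SomeGame/Saves")  # example path
dropbox = os.path.join(home, "Dropbox/SomeGame-Saves")

shutil.move(original, dropbox)   # move the data into the synchronized folder
os.symlink(dropbox, original)    # and link it back to where the application expects it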

Very convenient.

Now if only they’d provide me with a way to provide my own encryption key. That way I would instantly buy the pro account with 25GB of storage and move lots and lots of data in there.

Dropbox is the answer to the ever-increasing number of computers in my life, because now I don’t care about setting up the same stuff over and over again. It’s just there and ready. Very helpful.

My favourite asterisk feature

I’ve just added this to the dialplan context where the calls from and to our internal phones live.

[intercom]
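; dialling 55 plus an internal extension adds a Call-Info header that tells
; the SIP phone to answer immediately, then Dial() rings the extension with
; the leading "55" stripped off (that's what ${EXTEN:2} does)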
exten => _55[6-8][1-9],1,SIPAddHeader("Call-Info: sip:asterisk;answer-after=0")
exten => _55[6-8][1-9],2,Dial(SIP/${EXTEN:2})

This is as useless as it is fun.

Too bad softphones are getting some real attention in the office lately, as they don’t support the answer-after feature – and even if they did, where’s the fun in just making yourself heard on the victim’s headphones, as opposed to doing it directly on their speaker, loud enough to be heard in the whole office?

VoIP is fun, and it’s about time I do more with our asterisk config than just watch it work and never fail.

VMWare Fusion Speed

This may be totally placebo, but I noticed that using Vista inside a VMWare Fusion VM has just turned from nearly unbearably slow to actually quite fast after updating from 2.0 Beta 2 to 2.0 Final.

It may very well be that the beta versions contained additional logging and/or debug code which was keeping the VM from reaching its fullest potential.

So if you have been too lazy to upgrade and are still running one of the beta versions, you should consider updating. For me at least, it really brought a nice speed-up.