Apple Watch starting to be useful

Even after the Time for Coffee app was updated with watchOS 2.0 support last year and my Apple Watch became significantly more useful, the fact that the complication didn’t get a chance to update very often and the fact that launching the app took an eternity kind of detracted from the experience.

Which led to me not really using the watch most of the time. I’m not a watch person. Never was. And while the temptation of playing with a new gadget led to me wearing it on and off, I was still waiting for the killer feature to come around.

This summer, this has changed a lot.

I’m in the developer program, so I’m running this summer’s beta versions and Apple has also launched Apple Pay here in Switzerland.

So suddenly, by wearing the watch, I get access to a lot of very nice features that present themselves as huge user experience improvements:

  • While «Time for Coffee»’s complication currently is flaky at best, I can easily attribute this to watchOS’s current beta state. But that doesn’t matter anyway, because the watch now keeps apps running, so whenever I need public transport departure information and the complication is being flaky, I can just launch the app, which now comes up instantly and loads the information in less than a second.
  • Speaking of leaving apps running: The watch can now be configured to revert to the clock face only after more than 8 minutes have passed since the last use. This is perfect for the Bring shopping list app which now suddenly is useful. No more taking the phone out while shopping.
  • Auto-unlocking the Mac by the presence of an unlocked and worn watch has gone from not working at all, to working rarely, to working most of the time as the beta releases have progressed (and since beta 4 we also got the explanation that WiFi needs to be enabled on the to-be-unlocked Mac, so now it works on all machines). This is very convenient.
  • While most of the banks here in Switzerland boycott Apple Pay (a topic for another blog entry – both the banks and Apple are in the wrong), I did get a Cornèrcard which does work with Apple Pay. Being able to pay contactless with the watch even for amounts larger than CHF 50 (which is the limit for passive cards) is amazing.

Between all these features, I think there’s finally enough justification for me to actually wear the watch. It still happens that I forget to put it on now and then, but overall, this has totally put new life into this gadget, to the point where I’m inclined to say that it’s a totally new and actually very good experience now.

If you were on the fence before, give it a try come next autumn. It’s really great now.

AV Programs as a Security Risk

Imagine you were logged into your machine as an administrator. Imagine you’re going to double-click every single attachment in every single email you get. Imagine you’re going to launch every single file your browser downloads. Imagine you answer in the affirmative to every single prompt to install the latest whatever. Imagine you unpack every single archive sent to you and launch every single file in those archives.

This is the position that AV programs put themselves in on your machine if they want to have any chance of actually detecting malware. Just checking whether a file contains a known byte signature stopped being a reliable method for detecting viruses long ago.

It makes sense. If I’m going to re-distribute some well-known piece of malware, all I have to do is to obfuscate it a little bit or encrypt it with a static key and my piece of malware will naturally not match any signature of any existing malware.

The loader stub might, but if I’m using any of the existing installer packagers, then I don’t look any different from any other setup utility for any other piece of software. No AV vendor can yet afford to blacklist all installers.

So the only reliable way to know whether a piece of software is malware or not, is to start running it in order to at least get it to extract/decrypt itself.

So here we are, in a position where an anti-malware program is either a useless placebo or has to put itself into the position this article started with.

Personally, I think it is impossible to safely run a piece of software in a way that it cannot do any harm to the host machine.

AV vendors could certainly try to make it as hard as possible for malware to take over a host machine, but here we are in 2016, where most of the existing AV programs are based on projects started in the ’90s, when software quality and correctness were even less of a focus than they are today.

We see AV programs disabling OS security features, installing and starting VNC servers and providing any malicious web site with full shell access to the local machine. Or allowing malware to completely take over a machine if a few bytes are read, no matter where from.

And this doesn’t even cover the privacy issues caused by the ever increasing price pressure the various AV vendors are subject to. If you have to sell the software too cheaply to pay for its development (or even give it away for free), then you need to open up other revenue streams.

Being placed in such a privileged position as AV tools are, it’s no wonder what kinds of revenue streams are now in the process of being tapped…

AV programs by definition put themselves into an extremely dangerous spot on your machine: in order to read every file your OS wants to access, they have to run with administrative rights, and in order to actually protect you they have to understand many, many more file formats than you have applications for on your machine.

AV software has to support every existing archive format, even long-obsolete ones, because who knows – you might have some application somewhere that can unpack it; it has to support every possibly existing container format and it has to support all kinds of malformed files.

If you try to open a malformed file with some application, then the application has the freedom to crash. An AV program must keep going and try even harder to see into the file to make sure it’s just corrupt and not somehow malicious.

And as stated above: once it finally gets to some executable payload, it often has no choice but to actually execute it, at least partially.

This must be one of the most difficult things to get right in all of engineering: being placed at a highly privileged spot and being tasked to then deal with content that’s malicious by definition is an incredibly difficult task, and when combined with obviously bad security practices (see above), I come to the conclusion that installing AV programs actually lowers the overall security of your machines.

Given a fully patched OS, installing an AV tool will greatly widen the attack surface, as you’re now putting a piece of software on your machine that will try to make sense of every single byte going in and out of your machine, something your normal OS will not do.

AV tools have the choice of doing nothing against any but the most common threats if they decide to do pure signature matching, or of potentially putting your machine at risk.

AV these days might provide a very small bit of additional security against well-known threats (though against those you’re also protected if you apply the latest OS patches and don’t work as an admin), but it opens your installation wide to all kinds of targeted attacks or really nasty 0-day exploits that can bring down your network without any user interaction whatsoever.

If asked what to do these days, I would give the strong recommendation not to install AV tools. Keep all the software you’re running up to date and white-list the applications you want your users to run. Make use of white-listing by code signature to, say, allow everything by a specific vendor. Or all OS components.

If your users are more tech-savvy (like developers or sysadmins), don’t whitelist, but also don’t install AV on their machines. They are normally good enough not to accidentally run malware and the risk of them screwing up is much lower than the risk of somebody exploiting the latest flaw in your runs-as-admin-and-launches-every-binary piece of security software.

The new AppleTV

When the 2nd generation of the AppleTV came out and offered AirPlay support, I bought one more or less for curiosity value, but it worked so well in conjunction with AirVideo that it has completely replaced my previous attempts at an in-home media center system.

It was silent, never really required OS or application updates, never crashed and never overheated. And thanks to AirVideo it was able to play everything I could throw at it (at the cost of a server running in the closet, of course).

The only inconvenience was the fact that I needed too many devices. Playing a video involved my TV, the AppleTV and my iOS device. Plus remotes for TV and AppleTV. Personally, I didn’t really mind much, but while I would have loved to give my parents access to my media library (1 Gbit/s upstream FTW), the requirement to use three devices and to correctly switch on AirPlay has made this a complete impossibility due to the complexity.

So I patiently awaited the day when the AppleTV would finally be able to run apps itself. There was no technical reason to prevent that – the AppleTV totally was powerful enough for this and it was already running iOS.

You can imagine how happy I was when I finally got what I wanted and the new 4th generation AppleTV was announced. Finally a solution my parents can use. Finally something to help me ditch the majority of the devices involved.

So of course I bought the new device the moment it became available.

I even had to go through additional trouble due to the lack of the optical digital port (the old AppleTV was connected to a Sonos playbar), but I found an audio extractor that works well enough.

So now, after a few weeks of use, the one thing that actually pushed me to write this post is the fact that the new AppleTV is probably the most unfinished and unpolished product I have ever bought from Apple. Does it work? Yes. But the list of small oversights and missing pieces is bigger than I have ever seen in an Apple product. Ever.

Let me give you a list – quite like what I did 12 years ago for a very different device:

  • While the AppleTV provides you with an option to touch it with an iOS device to configure the Wi-Fi and Apple ID settings, I still had to type in my Apple ID password twice: once for the App Store and once for Game Center. Mind you, my Apple ID password is 30 characters long, containing uppercase, lowercase, digits and symbols. Have fun doing that on the on-screen keyboard.
  • The UI is laggy. The reason I had to type in the Game Center password was that the UI was still loading the system Apple ID as I was pressing the “Press here to login” button. First nothing happened, then the button turned into a “Press here to sign out” button and then the device reacted to my button press. Thank you.
  • The old AppleTV supported either the Remote app on an iPhone or even a bluetooth keyboard for character entry. The new one doesn’t support any of this, so there’s really no way around the crappy on-screen keyboard.
  • While the device allows you to turn off automatic app updates, there is no list of apps with pending updates. There’s only “Recently updated”, but that is a) limited to 20 apps, b) lists all recently updated apps, c) gives no indication which apps have already been updated and which haven’t, and finally d) isn’t even sorted by date of the last update. This UI is barely acceptable when automatic updates are enabled, but completely unusable if you want them disabled, to the point that I decided to just bite the bullet and enable them.
  • The sound settings offer “Automatic”, “Stereo” and “Dolby Surround”. Now, “Dolby Surround” is a technology from the mid-’90s that encodes one additional back channel in a stereo signal and is definitely not what you want (which would be “Dolby Digital”). Of course I assumed that there was some “helpfulness” at work here, detecting the fact that my TV doesn’t support Dolby Digital (but the Playbar does, so it’s perfectly fine to send out an AC-3 signal). Only after quite a bit of debugging did I find out that what Apple calls “Dolby Surround” is actually “Dolby Digital”. WHY??
  • The remote is way too sensitive. If you so much as lift it up, you’ll start seeking your video (which works way better than anything I’ve seen before, but still…)
  • Until the first update (provided without a changelog or anything of the like), the YouTube app would start to constantly interrupt playback and reload the stream once you had paused a video once.
  • Of course, Siri doesn’t work in Switzerland, even though I would totally be able to use it in English (or German – it’s available in Germany after all). Not that it matters, because the Swiss store is devoid of media I’d actually be interested in anyway, and there’s no way for third parties to integrate into the consolidated system-wide interface for media browsing.
  • Home Sharing doesn’t work for me. At. All. Even after typing in my Apple ID password a third time (which, yes, it asked me to).
  • It still doesn’t wake up on network access, nor appear in the list of AirPlay-able devices on my phone when it’s in sleep mode. This only happens in one segment of my network, so it might be an issue with a switch though – wouldn’t be the first time :/

I’m sure as time goes on we’ll see updates to fix this mess, but I cannot for the life of me understand why Apple thought that the time was ready to release this.

Again: it works fine and I will be bringing one to my mother next Friday because I know she’ll be able to use it just fine (especially using the Plex app). But this kind of lack of polish is what we’re used to on Android and Windows. How can Apple produce something like this?

IPv6 in production

Yesterday, I talked about why we need IPv6, and to make that actually happen, I decided to do my part and make sure that all of our infrastructure is available over IPv6.

Here’s a story of how that went:

The first step was to request an IPv6 allocation from our hosting provider: thankfully our contract with them included a /64, but it was never enabled, and when I asked for it, they initially tried to bill us CHF 12/month extra. After pointing them to the contract, though, they started to make IPv6 happen.

That this still took them multiple days to do was a pointer to me that they were not ready at all and that by asking, I was forcing them into readiness. I think I have done a good deed there.

dns

Before doing anything else, I made sure that our DNS servers are accessible over IPv6 and that IPv6 glue records existed for them.

We’re using PowerDNS, so actually supporting IPv6 connectivity was trivial, though a bit of tweaking was needed to tell it which interface to use for outgoing zone transfers.
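
For reference, the relevant pdns.conf settings might look roughly like this (a sketch only – the addresses are placeholders from the documentation ranges and the option names are the ones used by the PowerDNS releases of that era):

# listen on IPv6 in addition to IPv4
local-address=192.0.2.53
local-ipv6=2001:db8:1234:5678::53

# source addresses used for outgoing traffic such as NOTIFYs and zone-transfer queries
query-local-address=192.0.2.53
query-local-address6=2001:db8:1234:5678::53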

Creating the glue records for the DNS servers was trivial too – nic.ch has a nice UI to handle glue records. I already had IPv4 glue records, so all I had to do was add the v6 addresses.

web properties

Making our web properties available over IPv6 was trivial. All I had to do was to assign an IPv6 address to our frontend load balancer.

I did not change any of the backend network though. That’s still running IPv4 and it probably will for a long time to come, as I have already carefully allocated addresses, configured DHCP and I even know the IP addresses by heart. No need to change this.

I had to update the web application itself a tiny bit in order to cope with a REMOTE_ADDR that didn’t quite look the same any more though:

  • There were places where we put the remote address into the database. Thankfully, we are using PostgreSQL, whose native inet type (it even supports handy type-specific operators) has supported IPv6 since practically forever. If you’re using another database and you’re storing the address in a VARCHAR, be prepared to lengthen the columns, as IPv6 addresses are much longer.
  • There were some places where we were using CIDR matching for some privileged API calls we allow from the internal network. Of course, because I haven’t changed the internal network, no code change was strictly needed, but I have updated the code (and unit tests) to deal with IPv6 too (a sketch of such a check follows right after this list).
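
Here’s a minimal sketch of what an IPv6-aware CIDR check can look like in PHP (the function name and the networks are made up for illustration; the real code does a bit more):

<?php
// Does $addr (IPv4 or IPv6, e.g. $_SERVER['REMOTE_ADDR']) lie within $cidr?
function ip_in_cidr($addr, $cidr) {
    list($net, $bits) = explode('/', $cidr);
    $addrBin = inet_pton($addr);   // 4 bytes for IPv4, 16 bytes for IPv6
    $netBin  = inet_pton($net);
    if ($addrBin === false || $netBin === false || strlen($addrBin) !== strlen($netBin)) {
        return false;              // malformed input or address family mismatch
    }
    $bits = (int)$bits;
    $fullBytes = (int)($bits / 8); // compare whole bytes first...
    if (substr($addrBin, 0, $fullBytes) !== substr($netBin, 0, $fullBytes)) {
        return false;
    }
    $remaining = $bits % 8;        // ...then the remaining bits, if any
    if ($remaining === 0) {
        return true;
    }
    $mask = (0xff << (8 - $remaining)) & 0xff;
    return (ord($addrBin[$fullBytes]) & $mask) === (ord($netBin[$fullBytes]) & $mask);
}

// privileged API calls are only allowed from the internal networks
$allowed = ip_in_cidr($_SERVER['REMOTE_ADDR'], '10.1.0.0/16')
        || ip_in_cidr($_SERVER['REMOTE_ADDR'], '2001:db8:1234::/48');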

The last step was to add the AAAA record for our load balancer.
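
In zone-file terms, that’s just one more line next to the existing A record (placeholder names and addresses):

www.example.com.   3600  IN  A     192.0.2.80
www.example.com.   3600  IN  AAAA  2001:db8:1234:5678::80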

From that moment on, our web properties were available via IPv6 and while there’s not a lot of traffic from Switzerland, over in Germany, about 30% of all requests are happening over IPv6.

email

Of the bunch, dealing with email was the most complicated step. Not so much for enabling IPv6 support in the MTA as that was supported since forever (we’re using Exim (warning: very old post)).

The difficulty lay in getting everything else to work smoothly though – mostly in regards to spam filtering:

  • Many RBLs don’t support IPv6, so I had to make sure we weren’t accidentally treating all mail delivered to us over IPv6 as spam.
  • If you want to have any chance at your mail being accepted by remote parties, then you must have a valid PTR record for your mail server. This meant getting reverse DNS to work right for IPv6.
  • Of course you also need to update the SPF record now that you are sending email over IPv6 (see the example right after this list).
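
As a sketch, an SPF record that used to list only an IPv4 mail server gains an ip6 mechanism, roughly like this (placeholder names and addresses):

example.com.  3600  IN  TXT  "v=spf1 ip4:192.0.2.25 ip6:2001:db8:1234:5678::25 -all"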

PTR record

The PTR record was actually the crux of the matter.

With IPv4, it’s impractical or even impossible to get a reverse delegation for anything smaller than a /24 because of the way reverse lookups work in DNS. There was RFC 2317, but that was just too much hassle for most ISPs to implement.

So the process normally was to let the ISP handle the few PTR records you wanted.

This changes with IPv6 in two ways: the allocation is mostly fixed at a /64 or larger, and because there are so many IPv6 addresses, networks can be split at byte boundaries without being stingy, which makes it trivially easy to do proper reverse delegation to customers.

And because there are so many addresses available to a customer (a /64 allocation is enough addresses to cover 2^32 whole internets), reverse delegation is the only way to make good use of all of them.
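
To make the mechanics a bit more concrete, here’s what such a setup looks like for a made-up /64 from the documentation prefix (2001:db8:1234:5678::/64): the ISP delegates the matching ip6.arpa zone to the customer’s name servers, and the customer then publishes the PTR records inside it.

; at the ISP: delegate the reverse zone for 2001:db8:1234:5678::/64
8.7.6.5.4.3.2.1.8.b.d.0.1.0.0.2.ip6.arpa.  IN  NS  ns1.example.com.
8.7.6.5.4.3.2.1.8.b.d.0.1.0.0.2.ip6.arpa.  IN  NS  ns2.example.com.

; at the customer: PTR record for the mail server 2001:db8:1234:5678::25
5.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.7.6.5.4.3.2.1.8.b.d.0.1.0.0.2.ip6.arpa.  IN  PTR  mail.example.com.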

This is where I hit my next roadblock with the ISP though.

They were not at all set up for proper reverse delegation – the support ticket I have opened in November of 2014 took over 6 months to finally get closed in May of this year.

As an aside: This was a professional colocation provider for business customers that was, in 2014, not prepared to even just hand out IPv6 addresses and who required 6 months to get reverse delegation to work.

My awesome ISP has been handing out IPv6 addresses since the late ’90s and they offer reverse delegation for free to anybody who asks. As a matter of fact, it was them who asked me whether I wanted a reverse delegation last year when I signed up with them.

Of course I said yes :-)

This brought me to the paradoxical situation of having a fully working IPv6 setup at home while I had to wait for 6 months for my commercial business ISP to get there.

it’s done now

So after spending about 2 days learning about IPv6, after spending about 2 days updating our application, after spending one day convincing our ISP to give us the IPv6 allocation they promised in the contract and after waiting 6 months for the reverse delegation, I can finally say that all our services are now accessible via IPv6.

Here are the headers of the very first Email we’ve transmitted via IPv6

And here’s the achievement badge I waited so patiently (because of the PTR delegation) to finally earn 🎉

IPv6 Certification Badge for pilif

I can’t wait for the accompanying T-Shirt to arrive 😃

Why we need IPv6

As we are running out of IPv4 network addresses (and yes, we are), there are only two possible future scenarios, and one of the two most people are not going to like at all.

As IP addresses get more and more scarce, things will start to suck for both clients and content providers.

As more and more clients connect, carrier grade NAT will become the norm. NAT already sucks, but at least you get to control it and using NAT-PMP or UPnP, applications in your network get some control over being able to accept incoming connections.

Carrier-grade NAT is different. That’s NAT being done on the ISP’s end, so you don’t get to open ports at all. This will affect gaming performance, it will affect your ability to use VoIP clients and of course file sharing clients.

For content providers on the other hand, it will become more and more difficult to get the public IP addresses needed for them to be able to actually provide content.

Back in the day, if you wanted to launch a service, you would just do it. No need to ask anybody for permission. But in the future, as addresses become scarce and controlled by big ISPs which also act as content providers, the ISPs become the gatekeepers for new services.

Either you do something they like you to be doing, or you don’t get an address: as there will be way more content providers fighting over addresses than there will be addresses available, it’s easy for them to be picky.

Old companies who still have addresses of course are not affected, but competing against them will become hard or even impossible.

More power to the ISPs and no competition for existing content providing services both are very good things for players already in the game, so that’s certainly a possible future they are looking forward to.

If we want to prevent this possible future from becoming reality, we need a way out. IPv4 is drying up. IPv6 has existed for a long time, but people are reluctant to upgrade their infrastructure.

It’s a vicious cycle: People don’t upgrade their infrastructure to IPv6 because nobody is using IPv6 and nobody is using IPv6 because there’s nothing to be gained from using IPv6.

If we want to keep the internet as an open medium, we need to break the cycle. Everybody needs to work together to provide services over IPv6, to the point of even offering services over IPv6 exclusively.

Only then can we start to build pressure for ISPs to support IPv6 on their end.

If you are a content provider, ask your ISP for IPv6 support and start offering your content over IPv6. If you are an end user, pressure your ISP to offer IPv6 connectivity.

Knowing this, I started to get our commercial infrastructure up to speed a year ago, motivated by my awesome ISP, which has been offering IPv6 connectivity since forever.

Read on to learn how that went.

The Future of the JRPG genre

After an underwhelming false start with Xenoblade Chronicles back when the game came out, the re-release on the 3DS made me give it another try, and now that I’m nearly through with the game (I just beat the third-to-last main quest boss), I feel compelled to write my first game review after many years of non-gaming content here.

«Review» might not be entirely the correct term though, as this article is about to explain why I personally believe Xenoblade to be one of the best instances of the JRPG genre and why it might actually be very high up there in my list of all-time favorite games.

But first, let’s talk about what’s not so good about the game and why I nearly missed this awesome game: if I had to list the shortcomings of this masterpiece, they would be the UI design of the side-questing system and the very, very slow start of the story.

First the story: after maybe an hour of play time, the player is inclined to think they have been thrown into the usual revenge plot, this time about a fight against machine-based life forms, but a simple revenge plot nonetheless. Also, to be honest, it’s not even a really interesting revenge plot. It feels predictable and not at all like what we’re usually used to from the genre.

Once you reach the halfway mark of the game, the subtle hints the game has been dropping on you before that start to become less and less subtle, revealing to the player that they got it all wrong.

The mission of the game changes completely to the point of even completely changing whom you are fighting against and turning around many things you’ve taken for granted for the first half.

This is some of the most impressive story-development I’ve seen so far and also came as a complete surprise to me.

So what felt like the biggest shortcoming of the game (lackluster story) suddenly turned into one of its strongest points.

«Other games of the genre also did this» you might think as you compare this to Final Fantasy XII, but where that game unfortunately never really takes off nor adds any bigger plot-twists, the thing that Xenoblade does after the half-time marker is simply mind-blowing to the point of me refusing to post any spoilers even though the game is quite old by now.

So we have a game that gets amazing after 20-40 hours (depending on how you deal with the side-quests). What’s holding us over until then?

The answer to that question is the reason why I think that Xenoblade is one of the best JRPGs so far: What’s holding us over in the first 40 hours of the game is, you know, gameplay.

The battle system feels like it has been lifted from current MMORPGs (I’m mostly referring to World of Warcraft here as that’s the one I know best), though while it has been scaled down in sheer amount of skills, the abilities themselves have been much better balanced between the characters, which of course is possible in a single-player game.

The game’s affinity system also greatly incentivises the player to switch their party around as they play the game. This works really well when you consider the different play styles offered by the various characters. A tank plays differently from DPS which plays differently from the (unfortunately only one) healer.

But even between members of the same class there are differences in play style leading to a huge variety for players.

This is the first JRPG where I’m actually looking forward to combat – it’s that entertaining.

While the combat sometimes can be a bit difficult, especially because randomness still plays a huge part, it’s refreshing to see that the game doesn’t punish you at all for failing: If you die you just respawn at the last waypoint and usually there’s one of these right in front of the boss.

Even better, normally the fight just starts again, skipping all introductory cutscenes. And even if there are still some cutscenes that aren’t skipped automatically: the game always allows cutscenes to be skipped.

This makes a lot of sense, because combat is actually so much fun that there’s considerable replay value to the game, which is greatly reinforced by skippable cutscenes, though some of them you would never ever in your life want to skip – they are that good (you know which ones I’m referring to).

Combat is only one half of the gameplay, the other is exploration: the world of the game is huge and, for the first time ever in a JRPG, the simple rule of «you can see it, you can go there» applies. For the first time ever, the huge world is yours to explore and to enjoy.

Never have I seen such variety in locations, especially, again, in the second half of the game which I really don’t want to spoil here.

Which brings us to the side-quests: Imagine that you have a quest-log like you’re used to from MMORPGs with about the same style of quests: Find this item, kill these normal mobs, kill that elite mob, talk to that other guy – you know the drill.

The non-unique and somewhat random dialog lines between the characters as they accept these side-quests break the immersion a bit.

But the one big thing that’s really annoying about the side-quests is discoverability: As a player you often have no idea where to go due to the vague quest texts and, worse, many (most) quests are hidden and only become available after you trigger some event or you talk to the correct (seemingly unrelated) NPC.

While I can understand the former issue (vague quest descriptions) from a game-play perspective, the latter is inexcusable, especially as the leveling curve of the game and the affinity system both really are designed around you actually doing these side-quests.

It’s unfair and annoying that playing hide-and-seek for hours is basically a fixed requirement for having a chance at beating the game. This feels like a useless prolonging of the existing game for no reason but to, you know, prolong the game.

Thankfully though, by now, the Wiki exists, so whether you’re on the Wii or the 3DS, just have an iPad or Laptop close to you as you do the side-questy parts of the game.

Once you’re willing to live with this issue, the absolutely amazing gameplay comes into effect again: because exploration is so much fun and because the battle system is so much fun, the side quests suddenly become fun too, once you remove the annoying hide-and-seek aspect.

After all, it’s the perfect excuse to do more of what you enjoy the most: Playing the game.

This is why I strongly believe that this game would have been so much better with a more modern quest-log system: don’t hide (most of) the quests! Be precise in explaining where to find stuff! You don’t have to artificially prolong the game: even when you know where to go (I did, thanks to the Wiki), there are still more than 100 hours of entertainment to be had.

One last thing about quests: some of them require you to find rare items, which you have a random chance of getting by collecting «item orbs» spread all over the map. This is of course another nice way to encourage exploration.

But I see no reason why the drop rate must be random, especially as respawning the item orbs either requires you to wait 10 to 30 minutes or to save and reload the game.

If you want to encourage exploration, hide the orbs! There’s so much content in this game that artificially prolonging it with annoying saving-and-reloading escapades is completely unnecessary.

At least the amount of grinding required isn’t that bad – it’s absolutely bearable for me, and I have nearly zero patience for grinding.

Don’t get me wrong though: Yes, these artificial time-sinks were annoying (and frankly 100% unneeded), but because the actual gameplay is so much fun, I didn’t really mind them that much.

Finally, there are some technical issues, which I don’t really mind that much however: faces of characters look flat and blurry, which is very noticeable in the cut-scenes, which are all rendered by the engine itself (which is a very good thing).

Especially on the 3DS, the low resolution of the game is felt badly (the 3DS is much worse than the Wii, to the point of objects sometimes being invisible) and there are some objects popping into view at times. This is mostly a limitation of the hardware, which just doesn’t play well with the huge open world, so I can totally live with it. It only minimally affects my immersion in the game.

If you ask me what the preferred platform to play this on is, I would point at the Wii version, though, of course, it’ll be very hard to get the game at this point in time (no, you can’t have my copy).

the good

So after all of this, here’s a list of the unique features this game has over all other members of its genre:

  • Huge world that can be explored completely. No narrow hallways but just huge open maps.
  • Absolutely amazing battle system that goes far beyond the usual «select some action from this text-based menu»
  • Skippable cutscenes which together with the battle system make for a high replayability
  • Many different playable characters with different play styles
  • Great music by the god-like Mr. Mitsuda
  • A very, very interesting story once you reach the mid-point of the game
  • Very believable characters and very good character development
  • Some of the best cutscene direction I have ever seen in my life – again, mostly after the half-time mark (you people who played the game know which particular one I’m talking about – still sends shivers down my spine).

My wishes for the future

The game is nearly perfect in my opinion, but there are two things I think would be great to be fixed in the successor or any other games taking their inspiration from Xenoblade:

First, please fix the quest log and bring it into the current decade of what we’re used to from MMORPGs (where you lifted the quest design from to begin with): show us where to get the quests, show us where to do them.

Second, and this one is even bigger in my opinion: Please be more considerate in how you represent women in the game. Yes, the most bad-ass characters in the game are women (again, I can’t spoil anything here). Yes, there’s a lot of depth to the characters of women in this game and they are certainly not just there for show but are actually instrumental to the overall story development (again, second part).

But why does most of the equipment for the healer in the game have to be practically underwear? Do you really need to spend CPU resources on (overblown) breast physics when you render everybody’s faces blurry and flat?

Wouldn’t it be much better for the story and the immersion if the faces looked better at the cost of some (overblown) jiggling?

Do you really have to constantly show close-ups of way too big breasts of one party member? This is frankly distracting from what is going on in the game.

I don’t care about cultural differences: You managed to design very believable and bad-ass women into your game. Why do you have to diminish this by turning them into a piece of furniture to look at? They absolutely stand on their own with their abilities and their character progression.

It is the year 2015. We can do better than this (though, of course, the world was different in 2010 when the game initially came out).

Conclusion

All of that aside: because of the amazing gameplay, because of the mind-blowing story, because of the mind-blowing cutscene direction and because of the huge world that’s anything but narrow passages, I love this game more than many others.

I think this is the first time the JRPG genre has really moved forward in about a decade and I would definitely like to see more games ripping off the good aspects of Xenoblade (well – basically everything).

As such, I’m very much looking forward to the game’s successor becoming available here in Europe (it has just come out in Japan and my Japanese is still practically non-existent) and I know for a fact that I’m going to play it a lot, especially as I now know to be patient with the side quests.

Geek heaven

If I had to make a list of attributes I would like the ISP of my dreams to have, I could write quite the list:

  • I would really like to have native IPv6 support. Yes, IPv4 will be sufficient for a very long time, but unless people start having access to IPv6, it’ll never see the wide deployment it needs if we want the internet to continue to grow. An internet where addresses are only available to people with a lot of money is not an internet we all want to be subjected to (see my post «asking for permission»)
  • I would want my ISP to accept or even support network neutrality. For this to be possible, the ISP of my dreams would need to be nothing but an ISP so their motivations (provide better service) align with mine (getting better service). ISPs who also sell content have all the motivation to provide crappy Internet service in order to better sell their (higher-margin) content.
  • If I have technical issues, I want to be treated as somebody who obviously has a certain level of technical knowledge. I’m by no means an expert in networking technology, but I do know about powering it off and on again. If I have to say «shibboleet» to get to a real technician, so be it, but if that’s not needed, that’s even better.
  • The networking technology involved in getting me the connectivity I want should be widely available and thus easily replaceable if something breaks.
  • The networking technology involved should be as simple as possible: The more complex the hardware involved, the more stuff can break, especially when you combine cost-pressure for end-users with the need for high complexity.
  • The network equipment I’m installing at my home, which thus has access to my LAN, needs to be equipment I own and fully control. I do not accept leased equipment to which I do not have full access.
  • And last but not least, I would really like to have as much bandwidth as possible.

I’m sure I’m not alone with these wishes, even though, for «normal people» they might seem strange.

But honestly: They just don’t know it, but they too have the same interests. Nobody wants an internet that works like TV where you pay for access to a curated small list of “approved” sites (see network neutrality and IPv6 support).

Nobody wants to get up and reboot their modem every now and then because it crashed. Nobody wants to be charged with downloading illegal content because their Wifi equipment was suddenly repurposed as an open access point for other customers of an ISP.

Most of the wishes I listed above are the basis needed for these horror scenarios never coming to pass, however unlikely they might seem now (though getting up and rebooting the modem/router is something we already have to deal with today).

So yes. While it’s getting rarer and rarer to get all the points on my list fulfilled, to the point where I thought it impossible to get all of them, I’m happy to say that here in Switzerland there is at least one ISP that does all of this and more.

I’m talking about Init7 and especially their awesome FTTH offering Fiber7 which very recently became available in my area.

Let’s deal with the technology aspect first as this really isn’t the important point of this post: What you get from them is pure 1Gbit/s Ethernet. Yes, they do sell you a router box if you want one, but you can just as well just get a simple media converter, or just an SFP module to plug into any (managed) switch (with SFP port).

If you have your own routing equipment, be it a linux router like my shion or be it any Wifi Router, there’s no need to add any kind of additional complexity to your setup.

No additional component that can crash, no software running in your home to which you don’t have the password and certainly no sneakily opened public WLANs (I’m looking at you, cablecom).

Of course you get native IPv6 (a /48 which incidentally is room for 281474976710656 whole internets in your apartment) too.

But what’s really remarkable about Init7 isn’t the technical aspect (though, again, it’s bloody amazing), but everything else:

  • Init7 was one of the first ISPs in Switzerland to offer IPv6 to end users.
  • Init7 doesn’t just support network neutrality. They actively fight for it.
  • They explicitly state that they are not selling content and they don’t intend to start doing so. They are just an ISP and as such their motivations totally align with mine.

There are a lot of geeky soft factors too:

  • Their press releases are written in Open Office (check the PDF properties of this one for example)
  • I got an email from a technical person on their end that was written using f’ing Claws Mail on Linux
  • Judging from the Received headers of their email, they are using IPv6 in their internal LAN – down to the desktop workstations. And related to that:
  • The machines in their LAN respond to ICMPv6 pings, which is utterly crazy cool. Yes, they are firewalled (cough I had to try. Sorry.), but they let ICMP through. For the less technical readers here: this is as good an internet citizen as you will ever see and it’s extremely unexpected these days.

If you are a geek like me and your ideals align with the ones I listed above, there is no question: you have to support them. If you can have their fiber offering in your area, this is a no-brainer. You can’t get symmetric 1 Gbit/s for CHF 64ish per month anywhere else, and even if you could, it wouldn’t be plain Ethernet either.

If you can’t have their fiber offering, it’s still worth considering their other offers. They do have some DSL-based plans which of course are technically inferior to plain Ethernet over fiber, but you would still be supporting one of the few remaining pure ISPs.

It doesn’t have to be Init7 either. For all I know there are many others, maybe even here in Switzerland. Init7 is what I decided to go with, initially because of the Gbit, but the more I learned about their philosophy, the less important the bandwidth got.

We need to support companies like these because companies like these are what ensures that the internet of the future will be as awesome as the internet is today.

why I don’t touch crypto

When doing our work as programmers, we screw up. Small bugs, big bugs, laziness – the possibilities are endless.

Usually, when we screw up, we know that immediately: We get a failing test, we get an exception logged somewhere, or we hear from our users that such and such feature doesn’t work.

Also, most of the time, no matter how bad the bug, the issue can be worked around and the application keeps working overall.

Once you found the bug, you fix it and everybody is happy.

But imagine you had one of these off-by-one errors in your code (those that constantly happen to all of us) and further imagine that the function where the error was in was still apparently producing the same output as if the error wasn’t there.

Imagine that because of that error the apparently correctly looking output is completely useless and your whole application has just now utterly broken.

That’s crypto for you.

Crypto can’t be a «bit broken». It can’t be «mostly working». Either it’s 100% correct, or you shouldn’t have bothered doing it at all. The weakest link breaks the whole chain.

Worse: looking at the data you are working with doesn’t show any sign of wrongness. You encrypt something, you see random data. You decrypt it, you see clear text. Seems to work fine. Right! Right?

Last week’s issue in the random number generator in Cryptocat is a very good example.

The bug was an off-by-one error in their random number generator. The output of the function was still random numbers, looking at the output would clearly show random numbers. Given that fact, the natural bias for seeing code as being correct is only reinforced.

And yet it was wrong. The bug was there and the random numbers weren’t really random (enough).
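
To illustrate the class of bug (a made-up PHP sketch, not the actual Cryptocat code): an off-by-one in the range of a «random digit» helper makes one value impossible, yet every sample you eyeball still looks perfectly random.

<?php
// Made-up illustration, NOT the real Cryptocat code: this helper is supposed
// to return a uniformly random digit 0..9, but an off-by-one in the upper
// bound silently shrinks the range to 0..8.
function random_digit_buggy() {
    return mt_rand(0, 10 - 2);   // intended: mt_rand(0, 10 - 1)
}

// Eyeballing some output reveals nothing suspicious...
for ($i = 0; $i < 30; $i++) {
    echo random_digit_buggy();
}
echo "\n";

// ...only counting over many samples shows that the digit 9 never occurs.
$counts = array_fill(0, 10, 0);
for ($i = 0; $i < 100000; $i++) {
    $counts[random_digit_buggy()]++;
}
print_r($counts);   // $counts[9] stays at 0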

The weakest link was broken, and the whole effort in security was practically pointless, which is even worse in this case of an application whose only purpose is, you know, security.

Security wasn’t just an added feature to some other core functionality. It was the core functionality.

That small off-by-one error completely broke the whole application and was completely unnoticeable by just looking at the produced output. Writing a test case for this would have required complicated thinking and coding which would be just as likely to contain an error as the code to be tested.

This, my friends, is why I keep my hands off crypto. I’m just plain not good enough. Crypto is a world where understanding the concepts, understanding the math and writing tests just isn’t good enough.

The goal you have to reach is perfection. If you fail to reach that, then you have failed utterly.

Crypto is something I leave to others to deal with. Either they have reached perfection at which point they have my utmost respect. Or they fail at which point they have my understanding.

armchair scientists

The place: London. The time: Around 1890.

Imagine a medium sized room, lined with huge shelves filled with dusty books. The lights are dim, the air is heavy with cigar smoke. Outside the last shred of daylight is fading away.

In one corner of the room, you spot two large leather armchairs and a small table. On top of the table, two half-full glasses of whisky. In each of the armchairs, an elderly person.

One of them opens his mouth to speak:

«If I were in charge down there in South Africa, we’d be so much better off – running a colony just can’t be so hard as they make it out to be»

Conceivable to have happened? Yeah. Very likely actually. Crazy and misguided? Of course – we learned about that in school; imperialism doesn’t work.

Of course that elderly guy in the little story is wrong. The problems are way too complex for a bystander to even understand, let alone solve. More than likely he doesn’t even have a fraction of the background needed to understand the complexities.

And yet he sits there, in his comfortable chair, in the warmth of his club in cozy London and yet he explains that he knows so much better than, you know, the people actually doing the work.

Now think today.

Think about that article you just read that was explaining a problem the author was solving. Or that other article that was illustrating a problem the author is having, still in search of a solution.

Didn’t you feel the urge to go to Hacker News and reply how much you know better and how crazy the original poster must be not to see the obvious simple solution?

Having trouble scaling 4chan? How can that be hard?

Having trouble with your programming environment feeling unable to assign a string to another? Well. It’s just strings, why is that so hard?

Or those idiots at Amazon who can’t even keep their cloud service running? Clearly it can’t be that hard!

See a connection? By stating opinions like that, you are not even a little bit better than the elderly guy at the beginning of this essay.

Until you know all the facts, until you were there, on the ladder holding a hose trying to extinguish the flames, until then, you don’t have the right to assume that you’d do better.

The world we live in is incredibly complicated. Even though computer science might boil down to math, our job is dominated by side-effects and uncontrollable external factors.

Even if you think that you know the big picture, you probably won’t know all the details and without knowing the details, it’s increasingly likely that you don’t understand the big picture either.

Don’t be an armchair scientist.

Be a scientist. Work with people. Encourage them, discuss solutions, propose ideas, ask what obvious fact you missed or what was missing in the problem description.

This is 2012, not 1890.

E_NOTICE stays off.

I’m sure you’ve used this idiom a lot when writing JavaScript code:

options['a'] = options['a'] || 'foobar';

It’s short, it’s concise and it’s clear what it does. In Ruby, you can be even more concise:

params[:a] ||= 'foobar'

So you can imagine that I was happy with PHP 5.3’s new ?: operator:

<?php $options['a'] = $options['a'] ?: 'foobar';

In all three cases, the syntax is concise and readable, though arguably, the PHP one could read a bit better, but, ?: still is better than writing the full ternary expression, spelling out $options['a'] three times.

PopScan has, since forever (forever being 2004), run with E_NOTICE turned off. Back in the day, I felt it provided just baggage and I just wanted (had to) get things done quickly.

This, of course, led to people not taking enough care with the code, and recently I had one too many cases of a bug caused by accessing a variable that was undefined in a specific code path.

I decided that I’m willing to spend the effort on cleaning all of this up and making sure that there are no undeclared fields and variables in all of PopScan’s codebase.

Which turned out to be quite a bit of work as a lot of code is apparently happily relying on the default null that you can read out of undefined variables. Those instances might be ugly, but they are by no means bugs.

Cases where the null wouldn’t be expected are the ones I care about, but I don’t even want to go and discern the two – I’ll just fix all of the instances (embarrassingly many; most of them, thankfully, not mine).

Of course, if I put hours into a cleanup project like this, I want to be sure that nobody destroys my work again over time.

Which is why I was looking into running PHP with E_NOTICE in development mode at least.

Which brings us back to the introduction.

<?php $options['a'] = $options['a'] ?: 'foobar';

is wrong code. Accessing an undefined index of an array always raises a notice. It’s not like Python where you can choose (accessing a dictionary using [] will throw a KeyError, but there’s get() which just returns None). No. You don’t get to choose. You only get to add boilerplate:

<?php $options['a'] = isset($options['a']) ? $options['a'] : 'foobar';

See how I’m now spelling $options['a'] three times again? ?: just got a whole lot less useful.

But not only that. Let’s say you have code like this:

<?php
list($host, $port) = explode(':', trim($def));
$port = $port ?: 11211;

IMHO very readable and clear what it does: It extracts a host and a port and sets the port to 11211 if there’s none in the initial string.

This of course won’t work with E_NOTICE enabled. You either lose the very concise list() syntax, or you do – ugh – this:

<?php
list($host, $port) = explode(':', trim($def)) + array(null, null);
$port = $port ?: 11211;

Which looks ugly as hell. And no, you can’t write a wrapper around explode() which always returns an array big enough, because you don’t know what’s big enough. You would have to pass the number of nulls you want into the call too. That would look nicer than the above hack, but it still doesn’t even come close in conciseness to the solution which throws a notice.

So, in the end, I’m just complaining about syntax, you might think? I thought so too and I wanted to add the syntax I liked, so I did a bit of experimenting.

Here’s a little something I’ve come up with:

<?php
define('IT', 100000);
error_reporting(E_ALL);

$g = array('a' => 'b');
$a = '';

// helper: safe access to variables/array indexes that might not be set
function _(&$in, $k = null){
    if (!isset($in)) return null;
    if (is_array($in)){
        return isset($k) ? (isset($in[$k]) ? $in[$k] : null) : new WrappedArray($in);
    }
    return $in ?: null;
}

class WrappedArray{
    private $v;
    function __construct(&$v){
        $this->v = $v;
    }
    function __get($key){
        return isset($this->v[$key]) ? $this->v[$key] : null;
    }
    function __set($key, $value){
        $this->v[$key] = $value;
    }
}

// baseline: notices off, direct access to an undefined array index
error_reporting(E_ALL ^ E_NOTICE);
$start = microtime(true);
for ($i = 0; $i < IT; $i++){
    $a = $g['gnegg'];
}
$end = microtime(true);
printf("Notices off. Array %d iterations took %.6fs\n", IT, $end - $start);

// notices off, guarding the access with isset() inline
$start = microtime(true);
for ($i = 0; $i < IT; $i++){
    $a = isset($g['gnegg']) ? $g['gnegg'] : null;
}
$end = microtime(true);
printf("Notices off. Inline. Array %d iterations took %.6fs\n", IT, $end - $start);

// notices off, direct access to an undefined variable
$start = microtime(true);
for ($i = 0; $i < IT; $i++){
    $a .= $blupp;
}
$end = microtime(true);
$a = "";
printf("Notices off. Var. Array %d iterations took %.6fs\n", IT, $end - $start);

// notices on, silenced with the @-operator
error_reporting(E_ALL);
$start = microtime(true);
for ($i = 0; $i < IT; $i++){
    @$a = $g['gnegg'];
}
$end = microtime(true);
printf("Notices on. @-operator. %d iterations took %.6fs\n", IT, $end - $start);

$start = microtime(true);
for ($i = 0; $i < IT; $i++){
    @$a .= $blupp;
}
$end = microtime(true);
printf("Notices on. Var. @-operator. %d iterations took %.6fs\n", IT, $end - $start);

// notices on, going through the WrappedArray wrapper
$start = microtime(true);
for ($i = 0; $i < IT; $i++){
    $a = _($g)->gnegg;
}
$end = microtime(true);
printf("Wrapped array. %d iterations took %.6fs\n", IT, $end - $start);

// notices on, passing the key as a second parameter to the helper
$start = microtime(true);
for ($i = 0; $i < IT; $i++){
    $a = _($g, 'gnegg');
}
$end = microtime(true);
printf("Parameter call. %d iterations took %.6fs\n", IT, $end - $start);

// notices on, undefined variable routed through the helper
$start = microtime(true);
for ($i = 0; $i < IT; $i++){
    $a .= _($blupp);
}
$end = microtime(true);
$a = "";
printf("Undefined var. %d iterations took %.6fs\n", IT, $end - $start);

The wrapped array solution looks really compelling syntax-wise and I could totally see myself using this and even forcing everybody else to go there. But of course, I didn’t trust PHP’s interpreter and thus benchmarked the thing.

pilif@tali ~ % php e_notice_stays_off.php
Notices off. Array 100000 iterations took 0.118751s
Notices off. Inline. Array 100000 iterations took 0.044247s
Notices off. Var. Array 100000 iterations took 0.118603s
Wrapped array. 100000 iterations took 0.962119s
Parameter call. 100000 iterations took 0.406003s
Undefined var. 100000 iterations took 0.194525s

So: using nice syntactic sugar costs 7 times the performance. The second-best solution? Still 4 times. Out of the question. Yes, it could be seen as a micro-optimization, but 100’000 iterations, while a lot, is not that many. Waiting nearly a second instead of 0.1 seconds is crazy, especially for a common operation like this.

Interestingly, the most bloated code (that checks with isset()) is twice as fast as the most readable (just assign). Likely, the notice gets fired regardless of error_reporting() and then just ignored later on.

What really pisses me off about this is the fact that everywhere else PHP doesn’t give a damn. ‘0’ is equal to 0. Heck, even ‘abc’ is equal to 0. It even fails silently many times.

But in a case like this, where there is even newly added nice and concise syntax, it has to be overly conservative. And there’s no way to get to the needed solution but to either write too expensive wrappers or ugly boilerplate.

Dynamic languages give us a very useful tool to be dynamic in the APIs we write. We can create functions that take a dictionary (an array in PHP) of options. We can extend our objects at runtime by just adding a property. And with PHP’s (way too) lenient data conversion rules, we can even do math with user supplied string data.

But can we read data from $_GET without boilerplate? No. Not in PHP. Can we use a dictionary of optional parameters? Not in PHP. PHP would require boilerplate.

If a language basically mandates retyping the same expression three times, then, IMHO, something is broken. And if all the workarounds are either crappy to read or have very bad runtime properties, then something is terribly broken.

So, I decided to just fix the problem (undefined variable access) but leave E_NOTICE where it is (off). There’s always git blame and I’ll make sure I will get a beer every time somebody lets another undefined variable slip in.