Why I recommend against JWT

JSON Web Tokens are all the rage lately. They are lauded as a stateless alternative to server-side sessions and as the perfect way to handle authentication in your single-page app, and some people even sell them as a workaround for the EU cookie policy because, you know, they work without cookies too.

If you ask me, though, I would always recommend against using JWT to solve your problem.

Let me go through the common arguments in favour of JWT and debunk them, from weakest to strongest:

Debunking arguments

It requires no cookies

General “best” practice stores the JWT in the browser’s local storage and then sends it along to the server with every authenticated API call.

This is no different from a traditional cookie, with two exceptions: transmission to the server isn’t done automatically by the browser (as it would be for a cookie), and it is significantly less secure than a cookie. As there is no way to set a value in local storage outside of JavaScript, there consequently is no feature equivalent to cookies’ httponly flag. This means that any XSS vulnerability in your frontend gives an attacker access to the JWT.
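For comparison, here is a minimal sketch of the cookie-based approach (using Flask purely as an example framework; the route and cookie names are made up):

```python
from flask import Flask, make_response
import secrets

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    # ... credential verification elided ...
    token = secrets.token_urlsafe(32)
    resp = make_response("ok")
    # httponly: JavaScript, and therefore any XSS payload, cannot read the value.
    # secure: the cookie is only ever transmitted over HTTPS.
    resp.set_cookie("session", token, httponly=True, secure=True, samesite="Lax")
    return resp
```

A value in local storage has no equivalent of that httponly flag, full stop.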

Worse, as people often use JWT for both a short-lived access token and a refresh token, any XSS vulnerability now gives the attacker access to a valid refresh token that can be used to create new access tokens at will, even after the session has expired, completely invalidating the benefits of having separate refresh and access tokens.

“But at least I don’t need to display one of those EU cookie warnings”, I hear you say. But did you know that the warning is only required for tracking cookies? Cookies that are required for the operation of your site (such as a traditional session cookie) don’t require you to put up that warning in the first place.

It’s stateless

This is another argument often used in favour of JWT: because the server can put all the required state into the token itself, there’s no need to store anything on the server side, so you can load-balance incoming requests to whatever app server you want, and you don’t need any central store for session state.

In general, that’s true, but it becomes an issue once you need to revoke or refresh tokens.

JWT is often used in conjunction with OAuth where the server issues a relatively short-lived access token and a longer-lived refresh token.

If a client wants to refresh its access token, it uses its refresh token to do so. The server validates it and then hands out a new access token.

But for security reasons, you don’t want that refresh token to be re-usable (otherwise, a leaked refresh token could be used to gain access to the site for its whole validity period), and you probably also want to invalidate the previously used access token: if that has leaked, it could otherwise be used until its expiration date even though the legitimate client has already refreshed it.

So you need a means to blacklist tokens.

Which means you’re back to keeping track of state, because that’s the only way to do this. Either you blacklist the whole binary representation of the token, or you put some unique ID (the jti claim) into the token and then blacklist that (comparing after decoding the token). Whatever you do, you still need to keep track of that shared state.
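A minimal sketch of the ID-based variant (PyJWT as an example library; the in-memory set stands in for whatever shared store, such as Redis or a database table, you would use across your app servers):

```python
import jwt  # PyJWT

SECRET = "signing-key"  # placeholder
revoked_jtis = set()    # the shared state you were trying to avoid

def verify(token):
    # assumes tokens are issued with a unique jti claim
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    if claims["jti"] in revoked_jtis:
        raise PermissionError("token has been revoked")
    return claims

def revoke(token):
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    revoked_jtis.add(claims["jti"])
```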

And once you’re doing that, you lose all the perceived advantages of statelessness.

Worse: because the server has to invalidate and blacklist both the access and the refresh token when a refresh happens, a connection failure during a refresh can leave a client without any valid token, forcing the user to log in again.

In today’s world of mostly mobile clients on mobile phone networks, this happens more often than you’d think, especially as your access tokens should be relatively short-lived.

It’s better than rolling your own crypto

In general, yes, I agree with that argument. Anything is better than rolling your own crypto. But are you sure your library of choice has implemented the signature check and decryption correctly? Are you keeping up to date with security flaws in your library of choice (or its dependencies)?

You know what is still better than using existing crypto? Using no crypto whatsoever. If all you hand out to the client is a completely random token, and all you do is look up the data assigned to that token, then there’s no crypto anybody could get wrong.
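The whole scheme fits in a few lines; a sketch (the store is a plain dict here, but any server-side lookup works):

```python
import secrets

sessions = {}  # token -> session data

def create_session(user_id):
    # 32 bytes from the OS CSPRNG; the token itself carries no meaning
    token = secrets.token_urlsafe(32)
    sessions[token] = {"user_id": user_id}
    return token

def authenticate(token):
    # a single lookup: no parsing, no signatures, nothing to get wrong
    return sessions.get(token)
```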

A solution in search of a problem

So once all the good arguments in favour of JWT have dissolved, you’re left with all of its disadvantages:

  • By default, the JWT spec allows for insecure algorithms and key sizes. It’s up to you to choose safe parameters for your application (see the sketch after this list).
  • Doing JWT means you’re doing crypto and you’re parsing and verifying potentially hostile data. Are you up for this additional complexity compared to a single primary-key lookup?
  • JWTs contain quite a bit of metadata and other bookkeeping information. Transmitting this with every request is more expensive than transmitting a single ID.
  • It’s brittle: your application has to make sure never to make a request to the server without the token present. Every AJAX request your frontend makes needs to manually append the token, and as the server has to blacklist both access and refresh tokens whenever they are used, you might accidentally end up without a valid token when the connection fails during a refresh.
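If you do use JWT anyway, at least never let the token pick its own algorithm when you verify it. A sketch with PyJWT as an example library (key handling elided):

```python
import jwt

def verify(token, key):
    # Pinning the algorithm matters: tokens declaring alg="none", or an
    # RS256 token downgraded to HS256 with the public key as HMAC secret,
    # have historically slipped past libraries that trusted the header.
    return jwt.decode(token, key, algorithms=["HS256"])
```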

So are they really useless?

Even despite all these negative arguments, I think that JWTs are great for one specific purpose, and that’s authentication between different services in the backend when the various services can’t trust each other.

In such a case, you can use very short-lived tokens (with a lifetime measured in seconds at most), and they never leave your internal network. All the clients ever see is a traditional session cookie (in the case of a browser-based frontend) or a traditional OAuth access token.

This session cookie or access token is checked by the frontend servers (which, yes, have to have access to some shared state, but this isn’t an unsolvable issue), which then issue the required short-lived JWTs for talking to the various backend services.
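What the frontend server hands to the backend could look roughly like this (a sketch with PyJWT again; the claim values and the 30-second lifetime are illustrative):

```python
import jwt
from datetime import datetime, timedelta, timezone

SIGNING_KEY = "only-known-inside-the-backend-network"  # placeholder

def issue_service_token(user_id, target_service):
    now = datetime.now(timezone.utc)
    return jwt.encode(
        {
            "sub": str(user_id),
            "aud": target_service,               # which service may accept it
            "iat": now,
            "exp": now + timedelta(seconds=30),  # lifetime measured in seconds
        },
        SIGNING_KEY,
        algorithm="HS256",
    )
```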

Or you use them when you have two loosely coupled backend services that trust each other and need to talk to each other. There, too, you can issue short-lived tokens (given you are aware of the security issues described above).

In the case of short-lived tokens that never go out to the user, you circumvent most of the issues outlined above: they can be truly stateless because, thanks to their short lifetime, you don’t ever need to blacklist them, and they can be stored in a location that isn’t exposed to possible XSS attacks against your frontend.

This just leaves the issue of the difficult-to-get-right crypto, but as you never accept tokens from untrusted sources, a whole class of possible attacks becomes impossible, so you might even get away with a less rigorous update schedule.

So, please, when you are writing your next web API that uses any kind of authentication and you ask yourself “should I use JWT for this?”, resist the temptation. Using plain opaque tokens is always better when you talk to an untrusted frontend.

Only when you are working on scaling out your application and splitting it into multiple disconnected microservices and you need a way to pass credentials between them, then by all means go ahead and investigate JWT – it’ll surely be better than cobbling something together yourself.

Technology driven life changes

Last year, when I talked about finally seeing the Apple Watch becoming mildly useful, I had no idea what kind of a ride I was going to be on.

Generally, I’m not really concerned about my health or fitness, but last September, when my wonderful girlfriend left for a year of study in England, I decided that I finally had enough and I wanted to lose weight.

Having a year of near-zero social obligations would totally allow me to adjust my lifestyle in a way that’s conducive to weight loss, so here’s what I started doing:

  • During weekdays, I greatly reduced my calorie intake to basically just a salad and a piece of bread every day (you can pry my bread from my cold dead hands – it’s the one food I think I like the most).
  • Every day, no matter the weather, no matter the workload, no matter what, I was going to walk home after my workday at the office; on weekends, I would just take an equivalent walk.
  • Every day, I wanted to fill the “Activity” and the “Exercise” rings on my Apple Watch.

Now, walking home sounds like nothing special, but I’m privileged to live in Zürich, Switzerland, which means that I have very easy access to forests to walk in.

So commuting home by foot meant that I could walk at least 8 kilometres (4.9 miles), climbing 330m (1082 feet), most of it through the forest.

Every day, no matter whether it was way too hot, way too cold, whether it was raining, hailing or snowing, I would walk home. And every day I would be using my Apple Watch to track what I would generously call a “Workout” (even though it was just walking – but if you go from zero sports to that, I guess it’s ok to call it that).

From September to December, I gradually increased the distance I walked.

This is the other great thing about Zürich: Once you reach the forest (which you do by walking 20 minutes in practically any direction), you can stay in the forest for hours and hours.

First I extended the 8km walk to 10km, then 12, then 14 and finally 19 (11 miles).

During that time, I kept tracking all the vital signs I could track between the Apple Watch and a Withings scale I bought 1-2 months into this.

My walks got faster and my heart rate at rest got lower and lower, from 80 to now 60.

Every evening after the walk, I would look at the achievements handed out by my Watch. This is also why I’ve never updated my movement goals in the Activity app: getting all these badges, honestly, was a lot of fun and very motivational. Every evening I would get notified of extending my movement streak, of doubling or even tripling my movement goal, and of tripling or quadrupling my exercise goal.

screenshot of the activity app

Every morning I would weigh myself and bask in the glory of the ever-falling graph painted by the (back then very good) iOS app that came with the scale. I managed to lose a very consistent 2kg (4.4lbs) per week.

Every walk gave me the chance to experience some of nature’s beauty.

Crazy sunsets

sunset

Beautiful sunrises

sunrise

Enchanted forests

wintery forest

And frozen creeks

frozen creek

And in spring I could watch trees grow.

When I got home after up to three hours of walking, I was dead tired at around 10pm, meaning that for the first time in ages I would get more than enough sleep and still be able to get up between 6 and 7.

By mid-March, after 6 months of a very strict diet and walking home every day, I was done. I had lost 40kg (88.1 lbs).

Now the challenge shifted from losing weight to not gaining weight. I decided to make the diet less strict but also continue with my walks, though I would not do the regular 19km ones any more as they would just take too long (3 hours).

But by June, I really started to notice a change: I wouldn’t feel these walks at all any more. No sweat, no noticeable change in heart rate while on them, no tiredness. The walks really felt like a waste of time.

So I started running.

I never liked running. I was always bad at it. All the way through school, where I was the slowest and always felt really bad afterwards, through my life until now, where I just never did it. Running felt bad and I hated it.

But now things were different.

The first time I changed from walking to running, I did so after reaching the peak altitude, so it was mostly straight and a little bit downhill. But still: I ran 4km (2.48 miles) and when I got home I didn’t feel much more tired.

I was very surprised, because running 4km would have been completely unthinkable to me at any earlier point in my life, but there I was. I had just done it.

So the next day, I decided to run most of the way, just skipping the steepest parts. Suddenly, there I was, running 8km (4.9 miles) and still not feeling particularly tired afterwards.

So I started tracking these runs (using both Runkeeper and Strava for technical reasons – but that’s another post), seeing improvement in my time all the way through July.

And then, on August 1st, I ran a half-marathon, climbing 612m (2007 feet).

screenshot of the half-marathon in the tracking app

Considering that this was my first, the time isn’t even too bad, and what’s even more fun to me: I didn’t feel too tired afterwards and totally felt like I could have run even farther.

So I guess that after taking it very slowly and moving from walking a bit to walking more to walking a lot to running a bit to running some more, even I, the most unathletic person possible, can push myself into shape.

But the most interesting aspect of all of this is that without technology, the Apple Watch in particular, and without the cheesy achievements, none of this would ever have been possible. I hated sports and I’m honestly still not really interested. But the prospect of being awarded some stupid badges every day is what finally pushed me.

And now, in only a single month, my girlfriend will finally return to Switzerland, and I guess she’ll find me in better shape than she’s ever seen me in before. I hope that the prospect of collecting some more badges from my watch will keep me going even when social pressure might tempt me into skipping a workout.

Apple Watch starting to be useful

Even after the Time for Coffee app was updated for watchOS 2.0 support last year and my Apple Watch became significantly more useful, the fact that the complication didn’t get a chance to update very often and that launching the app took an eternity kind of detracted from the experience.

Which led to me not really using the watch most of the time. I’m not a watch person. Never was. And while the temptation of playing with a new gadget led to me wearing it on and off, I was still waiting for the killer feature to come around.

This summer, this has changed a lot.

I’m in the developer program, so I’m running this summer’s beta versions and Apple has also launched Apple Pay here in Switzerland.

So suddenly, by wearing the watch, I get access to a lot of very nice features that present themselves as huge user experience improvements:

  • While «Time for Coffee»’s complication currently is flaky at best, I can easily attribute this to watchOS’s current beta state. But that doesn’t matter anyway, because the watch now keeps apps running, so whenever I need public transport departure information and the complication is flaky, I can just launch the app, which now comes up instantly and loads the information in less than a second.
  • Speaking of leaving apps running: the watch can now be configured to revert to the clock face only after more than 8 minutes have passed since the last use. This is perfect for the Bring shopping-list app, which is suddenly useful now. No more taking the phone out while shopping.
  • Auto-unlocking the Mac via the presence of an unlocked and worn watch has gone from not working at all, to working rarely, to working most of the time as the beta releases have progressed (and since beta 4 we also got the explanation that WiFi needs to be enabled on the to-be-unlocked Mac, so now it works on all machines). This is very convenient.
  • While most of the banks here in Switzerland boycott Apple Pay (a topic for another blog entry – both the banks and Apple are in the wrong), I did get a Cornèrcard, which does work with Apple Pay. Being able to pay contactless with the watch, even for amounts larger than CHF 50 (which is the limit for passive cards), is amazing.

Between all these features, I think there’s finally enough justification for me to actually wear the watch. It still happens that I forget to put it on now and then, but overall, this has totally put new life into this gadget, to the point where I’m inclined to say that it’s a totally new and actually very good experience now.

If you were on the fence before, give it a try come next autumn. It’s really great now.

AV Programs as a Security Risk

Imagine you were logged into your machine as an administrator. Imagine you’re going to double-click every single attachment in every single email you get. Imagine you’re going to launch every single file your browser downloads. Imagine you answer affirmatively to every single prompt to install the latest whatever. Imagine you unpack every single archive sent to you and launch every single file in those archives.

This is the position that AV programs put themselves in on your machine if they want to have any chance of actually detecting malware. Just checking whether a file contains a known byte signature stopped being a reliable method for detecting viruses long ago.

It makes sense. If I’m going to re-distribute some well-known piece of malware, all I have to do is obfuscate it a little bit or encrypt it with a static key, and my piece of malware will naturally not match any signature of any existing malware.

The loader stub might, but if I’m using any of the existing installer packagers, then I don’t look any different from any other setup utility for any other piece of software. No AV vendor can yet afford to blacklist all installers.
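To illustrate just how cheap this evasion is, here is a toy sketch (the byte pattern is made up, and real packers are of course far more elaborate):

```python
KNOWN_BAD_SIGNATURE = bytes.fromhex("deadbeefcafe")  # hypothetical AV signature

payload = b"...malware..." + KNOWN_BAD_SIGNATURE + b"...more malware..."

# "encrypt" with a static single-byte XOR key
key = 0x5A
obfuscated = bytes(b ^ key for b in payload)

assert KNOWN_BAD_SIGNATURE not in obfuscated  # a naive byte scan finds nothing
# A tiny loader stub XORs the payload back at runtime and executes it.
```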

So the only reliable way to know whether a piece of software is malware or not is to start running it, in order to at least get it to extract/decrypt itself.

So here we are, in a position where an anti-malware program is either a useless placebo or has to put itself into the position I started this article with.

Personally, I think it is impossible to safely run a piece of software in a way that it cannot do any harm to the host machine.

AV vendors could certainly try to make it as hard as possible for malware to take over a host machine, but here we are in 2016, where most of the existing AV programs are based on projects started in the 90s, when software quality and correctness were even less of a focus than they are today.

We see AV programs disabling OS security features, installing and starting VNC servers and providing any malicious web site with full shell access to the local machine. Or allowing malware to completely take over a machine if a few bytes are read, no matter where from.

This doesn’t even cover the privacy issues caused by the ever-increasing price pressure the various AV vendors are subject to. If you have to sell the software too cheaply to pay for its development (or even give it away for free), then you need to open other revenue streams.

Being placed in such a privileged position as AV tools are, it’s no wonder what kinds of revenue streams are now in the process of being tapped…

AV programs by definition put themselves into an extremely dangerous spot on your machine: in order to read every file your OS wants to access, they have to run with administrative rights, and in order to actually protect you, they have to understand many, many more file formats than you have applications for on your machine.

AV software has to support every existing archive format, even long-obsolete ones, because who knows – you might have some application somewhere that can unpack it; it has to support every container format in existence, and it has to support all kinds of malformed files.

If you try to open a malformed file with some application, then the application has the freedom to crash. An AV program must keep going and try even harder to see into the file to make sure it’s just corrupt and not somehow malicious.

And as stated above: once it finally gets to some executable payload, it often has no choice but to actually execute it, at least partially.

This must be one of the most difficult things to get right in all of engineering: being placed in a highly privileged spot and being tasked to deal with content that is malicious per definitionem is an incredibly difficult task, and when combined with obviously bad security practices (see above), it leads me to the conclusion that installing AV programs actually lowers the overall security of your machines.

Given a fully patched OS, installing an AV tool will greatly widen the attack surface, as you’re now putting a piece of software on your machine that will try to make sense of every single byte going in and out of it, something your normal OS would not do.

AV tools have the choice between doing nothing against any but the most common threats, if they stick to pure signature matching, and potentially putting your machine at risk.

AV these days might provide a very small bit of additional security against well-known threats (though against those you’re also protected if you apply the latest OS patches and don’t work as an admin), but it opens your installation wide to all kinds of targeted attacks or really nasty 0-day exploits that can bring down your network without any user interaction whatsoever.

If asked what to do these days, I would give the strong recommendation to not install AV tools. Keep all the software you’re running up to date and whitelist the applications you want your users to run. Make use of whitelisting by code signature to, say, allow everything by a specific vendor, or all OS components.

If your users are more tech-savvy (like developers or sysadmins), don’t whitelist, but also don’t install AV on their machines. They are normally good enough not to accidentally run malware, and the risk of them screwing up is much lower than the risk of somebody exploiting the latest flaw in your runs-as-admin-and-launches-every-binary piece of security software.

The new AppleTV

When the 2nd generation of the AppleTV came out and offered AirPlay support, I bought one more or less for curiosity value, but it worked so well in conjunction with AirVideo that it has completely replaced my previous attempts at an in-home media center system.

It was silent, never really required OS or application updates, never crashed and never overheated. And thanks to AirVideo, it was able to play everything I could throw at it (at the cost of a server running in the closet, of course).

The only inconvenience was the fact that I needed too many devices. Playing a video involved my TV, the AppleTV and my iOS device, plus remotes for the TV and the AppleTV. Personally, I didn’t really mind much, but while I would have loved to give my parents access to my media library (1 Gbit/s upstream FTW), the requirement to use three devices and to correctly switch on AirPlay made this a complete impossibility due to the complexity.

So I patiently awaited the day when the AppleTV would finally be able to run apps itself. There was no technical reason to prevent that – the AppleTV totally was powerful enough for this and it was already running iOS.

You can imagine how happy I was when I finally got what I wanted and the new 4th-generation AppleTV was announced. Finally a solution my parents could use. Finally something to help me ditch the majority of the devices involved.

So of course I bought the new device the moment it became available.

I even had to go through additional trouble due to the lack of the optical digital port (the old AppleTV was connected to a Sonos playbar), but I found an audio extractor that works well enough.

So now, after a few weeks of use, the one thing that actually pushed me to write this post is the fact that the new AppleTV is probably the most unfinished and unpolished product I have ever bought from Apple. Does it work? Yes. But the list of small oversights and missing pieces is bigger than I have ever seen in an Apple product. Ever.

Let me give you a list – quite like the one I made 12 years ago for a very different device:

  • While the AppleTV provides you with the option to touch it with an iOS device to configure the Wifi and Apple ID settings, I still had to type in my Apple ID password twice: once for the App Store and once for Game Center. Mind you, my Apple ID password is 30 characters long, containing uppercase, lowercase, digits and symbols. Have fun doing this on the on-screen keyboard.
  • The UI is laggy. The reason I had to type in the Game Center password at all was that the UI was still loading the system Apple ID as I was pressing the “Press here to login” button. First nothing happened, then the button turned into a “Press here to sign out” button, and then the device reacted to my button press. Thank you.
  • The old AppleTV supported either the Remote app on an iPhone or even a Bluetooth keyboard for character entry. The new one supports none of this, so there’s really no way around the crappy on-screen keyboard.
  • While the device allows you to turn off automatic app updates, there is no list of apps with pending updates. There’s only “Recently updated”, but that is a) limited to 20 apps, b) a list of all recently updated apps, c) giving no indication of which apps have been updated yet and which haven’t, and finally d) not even sorted by the date of the last update. This UI is barely acceptable when automatic updates are enabled, but completely unusable if you want them disabled, to the point that I decided to just bite the bullet and enable them.
  • The sound settings offer “Automatic”, “Stereo” and “Dolby Surround”. Now, “Dolby Surround” is a technology from the mid-90s that encodes one additional back channel into a stereo signal and is definitely not what you want (that would be “Dolby Digital”). Of course I assumed that there was some “helpfulness” at work here, detecting that my TV doesn’t support Dolby Digital (but the playbar does, so it’s perfectly fine to send out an AC-3 signal). Only after quite a bit of debugging did I find out that what Apple calls “Dolby Surround” is actually “Dolby Digital”. WHY??
  • The remote is way too sensitive. If you so much as lift it up, you’ll start seeking in your video (which works way better than anything I’ve seen before, but still…).
  • Until the first update (provided without a changelog or anything of the like), the YouTube app would constantly interrupt playback and reload the stream once you had paused a video.
  • Of course Siri doesn’t work in Switzerland, even though I would totally be able to use it in English (or German – it’s available in Germany, after all). Not that it matters, because the Swiss store is devoid of media I’d actually be interested in anyway, and there’s no way for third parties to integrate into the consolidated system-wide interface for media browsing.
  • Home Sharing doesn’t work for me. At. All. Even after typing in my Apple ID password a third time (which, yes, it asked me to).
  • It still doesn’t wake up on network access, nor does it appear in the list of AirPlay-able devices on my phone when it’s in sleep mode. This only happens in one segment of my network, so it might be an issue with a switch though – wouldn’t be the first time :/

I’m sure as time goes on we’ll see updates to fix this mess, but I cannot for the life of me understand why Apple thought that the time was ready to release this.

Again: it works fine, and I will be bringing one to my mother next Friday because I know she’ll be able to use it just fine (especially using the Plex app). But this kind of lack of polish is what we’re used to on Android and Windows. How can Apple produce something like this?

IPv6 in production

Yesterday, I talked about why we need IPv6, and to help make that actually happen, I decided to do my part and make sure that all of our infrastructure is available over IPv6.

Here’s the story of how that went:

The first step was to request an IPv6 allocation from our hosting provider: thankfully, our contract with them included a /64, but it was never enabled, and when I asked for it, they initially tried to bill us an extra CHF 12/month. After pointing them to the contract, they started to make IPv6 happen.

That this still took them multiple days was a pointer to me that they were not ready at all, and by asking, I was forcing them into readiness. I think I have done a good deed there.

dns

Before doing anything else, I made sure that our DNS servers were accessible over IPv6 and that IPv6 glue records existed for them.

We’re using PowerDNS, so actually supporting IPv6 connectivity was trivial, though a bit of tweaking was needed to tell it which interface to use for outgoing zone transfers.

Creating the glue records for the DNS servers was trivial too – nic.ch has a nice UI to handle glue records. I already had IPv4 glue records, so all I had to do was add the IPv6 addresses.

web properties

Making our web properties available over IPv6 was trivial. All I had to do was to assign an IPv6 address to our frontend load balancer.

I did not change any of the backend network though. That’s still running IPv4, and it probably will for a long time to come, as I have already carefully allocated addresses, configured DHCP, and I even know the IP addresses by heart. No need to change this.

I had to update the web application itself a tiny bit in order to cope with a REMOTE_ADDR that didn’t quite look the same any more though:

  • There were places where we put the remote address into the database. Thankfully, we are using PostgreSQL, whose native inet type (it even supports handy type-specific operators) has supported IPv6 since practically forever. If you’re using another database and you’re storing the address in a VARCHAR, be prepared to lengthen the column, as IPv6 addresses are much longer.
  • There were some places where we were using CIDR matching for some privileged API calls we allow from the internal network. Of course, because I haven’t changed the internal network, no code change was strictly needed, but I have updated the code (and unit tests) to deal with IPv6 too (see the sketch below).
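For the CIDR-matching part, here is roughly what the dual-stack check looks like, sketched in Python with the standard ipaddress module (our actual code may differ, and the network ranges are examples):

```python
import ipaddress

PRIVILEGED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),     # example internal IPv4 range
    ipaddress.ip_network("2001:db8::/32"),  # example IPv6 prefix
]

def is_privileged(remote_addr):
    # ip_address() parses both IPv4 and IPv6 textual forms; a containment
    # check across address families simply returns False
    addr = ipaddress.ip_address(remote_addr)
    return any(addr in net for net in PRIVILEGED_NETWORKS)
```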

The last step was to add the AAAA record for our load balancer.

From that moment on, our web properties were available via IPv6, and while there’s not a lot of IPv6 traffic from Switzerland, over in Germany about 30% of all requests are happening over IPv6.

email

Of the bunch, dealing with email was the most complicated step. Not so much for enabling IPv6 support in the MTA, as that has been supported since forever (we’re using Exim (warning: very old post)).

The difficulty lay in getting everything else to work smoothly, mostly with regard to spam filtering:

  • Many RBLs don’t support IPv6, so I had to make sure we weren’t accidentally treating all mail delivered to us over IPv6 as spam.
  • If you want to have any chance of your mail being accepted by remote parties, you must have a valid PTR record for your mail server. This meant getting reverse DNS to work right for IPv6 (the check that receiving servers perform is sketched after this list).
  • Of course you also need to update your SPF record now that you are sending email over IPv6.
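The PTR requirement boils down to forward-confirmed reverse DNS: the receiving server looks up the PTR name for the connecting address and checks that this name resolves back to the same address. Roughly, in Python (a sketch; real implementations also normalize the address representations before comparing):

```python
import socket

def forward_confirmed_rdns(ip):
    # reverse lookup: PTR name for the connecting address (IPv4 or IPv6)
    hostname, _aliases, _addrs = socket.gethostbyaddr(ip)
    # forward lookup: that name must resolve back to the same address
    resolved = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    return ip in resolved
```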

PTR record

The PTR record was actually the crux of the matter.

In IPv4, it’s impractical or even impossible to get a reverse delegation for anything smaller than a /24, because of the way reverse lookup works in DNS. There was RFC 2317, but that was just too much hassle for most ISPs to implement.

So the process normally was to let the ISP handle the few PTR records you wanted.

This changes with IPv6 in two ways: allocations are generally a /64 or larger, and because there are so many IPv6 addresses, networks can be split at byte boundaries without being stingy, so it is trivially easy to do proper reverse delegation to customers.

And because there are so many addresses available to a customer (a /64 allocation is enough addresses to cover 2^32 whole internets), reverse delegation is the only way to make good use of them all.
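The nibble-based ip6.arpa names that such a delegation hands over are easy to inspect; Python’s ipaddress module will generate them for you:

```python
import ipaddress

print(ipaddress.ip_address("2001:db8::1").reverse_pointer)
# 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa
# A /64 delegation simply hands the customer everything below the first
# 16 nibbles, i.e. the zone 0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa here.
```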

This is where I hit my next roadblock with the ISP though.

They were not at all set up for proper reverse delegation – the support ticket I opened in November 2014 took over six months to finally get closed in May of this year.

As an aside: this was a professional colocation provider for business customers that was, in 2014, not prepared to even just hand out IPv6 addresses and that required six months to get reverse delegation to work.

My awesome ISP has been handing out IPv6 addresses since the late 90s, and they offer reverse delegation for free to anybody who asks. As a matter of fact, it was they who asked me whether I wanted a reverse delegation last year when I signed up with them.

Of course I said yes :-)

This brought me to the paradoxical situation of having a fully working IPv6 setup at home while I had to wait for 6 months for my commercial business ISP to get there.

it’s done now

So after spending about 2 days learning about IPv6, after spending about 2 days updating our application, after spending one day convincing our ISP to give us the IPv6 allocation they promised in the contract and after waiting 6 months for the reverse delegation, I can finally say that all our services are now accessible via IPv6.

Here are the headers of the very first email we transmitted via IPv6.

And here’s the achievement badge I waited so patiently (because of the PTR delegation) to finally earn 🎉

IPv6 Certification Badge for pilif

I can’t wait for the accompanying T-Shirt to arrive 😃

Why we need IPv6

As we are running out of IPv4 network addresses (and yes, we are), there are only two possible future scenarios, and one of them most people are not going to like at all.

As IP addresses get more and more scarce, things will start to suck for both clients and content providers.

As more and more clients connect, carrier-grade NAT will become the norm. NAT already sucks, but at least you get to control it, and using NAT-PMP or UPnP, applications in your network get some control over being able to accept incoming connections.

Carrier-grade NAT is different: the NAT is being done on the ISP’s end, so you don’t get to open ports at all. This will affect gaming performance, your ability to use VoIP clients and, of course, file-sharing clients.

For content providers, on the other hand, it will become more and more difficult to get the public IP addresses they need to actually provide content.

Back in the day, if you wanted to launch a service, you would just do it. No need to ask anybody for permission. But in the future, as addresses become scarce and controlled by big ISPs that also act as content providers, the ISPs become the gatekeepers for new services.

Either you do something they like you to be doing, or you don’t get an address: as there will be way more content providers fighting over addresses than there are addresses available, it’s easy for them to be picky.

Old companies that still have addresses are of course not affected, but competing against them will become hard or even impossible.

More power to the ISPs and no competition for existing content-providing services are both very good things for the players already in the game, so that’s certainly a possible future they are looking forward to.

If we want to prevent this possible future from becoming reality, we need a way out. IPv4 is drying up. IPv6 has existed for a long time, but people are reluctant to upgrade their infrastructure.

It’s a vicious cycle: People don’t upgrade their infrastructure to IPv6 because nobody is using IPv6 and nobody is using IPv6 because there’s nothing to be gained from using IPv6.

If we want to keep the internet as an open medium, we need to break the cycle. Everybody needs to work together to provide services over IPv6, to the point of even offering services over IPv6 exclusively.

Only then can we start to build pressure for ISPs to support IPv6 on their end.

If you are a content provider, ask your ISP for IPv6 support and start offering your content over IPv6. If you are an end user, pressure your ISP to offer IPv6 connectivity.

Knowing this, one year ago, motivated by my awesome ISP, which has offered IPv6 connectivity since forever, I started to get our commercial infrastructure up to speed.

Read on to learn how that went.

The Future of the JRPG genre

After an underwhelming false start with Xenoblade Chronicles back when the game came out, the re-release on the 3DS made me give it another try, and now that I’m nearly through with the game (I just beat the third-to-last main-quest boss), I feel compelled to write my first game review after many years of non-gaming content here.

«Review» might not be entirely the correct term though, as this article is about to explain why I personally believe Xenoblade to be one of the best instances of the JRPG genre; it might actually be very high up in my list of all-time favorite games.

But first, let’s talk about what’s not so good about the game and why I nearly missed this awesome game: if I had to list the shortcomings of this masterpiece, they would be the UI design of the side-questing system and the very, very slow start of the story.

First the story: after maybe an hour of play time, the player is inclined to think they have been thrown into the usual revenge plot, this time about a fight against machine-based life-forms, but a simple revenge plot nonetheless. Also, to be honest, it’s not even a really interesting revenge plot. It feels predictable and not at all like what we’re usually used to from the genre.

Once you reach the half-time mark of the game, the subtle hints the game has been dropping on you until then start to become less and less subtle, revealing to the player that they got it all wrong.

The mission of the game changes completely, to the point of even completely changing whom you are fighting against and turning around many things you had taken for granted in the first half.

This is some of the most impressive story development I’ve seen so far, and it came as a complete surprise to me.

So what felt like the biggest shortcoming of the game (lackluster story) suddenly turned into one of its strongest points.

«Other games of the genre have also done this», you might think, comparing this to Final Fantasy XII, but where that game unfortunately never really takes off nor adds any bigger plot twists, what Xenoblade does after the half-time marker is simply mind-blowing, to the point of me refusing to post any spoilers even though the game is quite old by now.

So we have a game that gets amazing after 20-40 hours (depending on how you deal with the side-quests). What’s holding us over until then?

The answer to that question is the reason why I think that Xenoblade is one of the best JRPGs so far: What’s holding us over in the first 40 hours of the game is, you know, gameplay.

The battle system feels like it has been lifted from current MMORPGs (I’m mostly referring to World of Warcraft here, as that’s the one I know best), though while it has been scaled down in the sheer number of skills, the abilities themselves have been much better balanced between the characters, which of course is possible in a single-player game.

The game’s affinity system also greatly incentivises the player to switch their party around as they play. This works really well when you consider the different play styles offered by the various characters: a tank plays differently from DPS, which plays differently from the healer (of which there unfortunately is only one).

But even between members of the same class there are differences in play style leading to a huge variety for players.

This is the first JRPG where I’m actually looking forward to combat – it’s that entertaining.

While the combat can sometimes be a bit difficult, especially because randomness still plays a huge part, it’s refreshing to see that the game doesn’t punish you at all for failing: if you die, you just respawn at the last waypoint, and usually there’s one of these right in front of the boss.

Even better: normally, the fight just starts again, skipping all introductory cutscenes. And even if some cutscenes aren’t skipped automatically, the game always allows cutscenes to be skipped.

This makes a lot of sense, because combat is actually so much fun that there’s considerable replay value to the game, which skippable cutscenes only reinforce, though some of them you would never ever in your life want to skip – they are that good (you know which ones I’m referring to).

Combat is only one half of the gameplay; the other is exploration: the world of the game is huge, and for the first time ever in a JRPG, the simple rule of «you can see it, you can go there» applies. For the first time ever, the huge world is yours to explore and to enjoy.

Never have I seen such variety in locations, especially, again, in the second half of the game which I really don’t want to spoil here.

Which brings us to the side-quests: Imagine that you have a quest-log like you’re used to from MMORPGs with about the same style of quests: Find this item, kill these normal mobs, kill that elite mob, talk to that other guy – you know the drill.

The non-unique and somewhat random dialog lines between the characters as they accept these side-quests break the immersion a bit.

But the one big thing that’s really annoying about the side-quests is discoverability: As a player you often have no idea where to go due to the vague quest texts and, worse, many (most) quests are hidden and only become available after you trigger some event or you talk to the correct (seemingly unrelated) NPC.

While I can understand the former issue (vague quest descriptions) from a game-play perspective, the latter is inexcusable, especially as the leveling curve of the game and the affinity system both really are designed around you actually doing these side-quests.

It’s unfair and annoying that playing hide-and-seek for hours is basically a fixed requirement for having a chance at beating the game. This feels like a useless prolonging of the game for no reason but to, you know, prolong the game.

Thankfully though, by now, the Wiki exists, so whether you’re on the Wii or the 3DS, just have an iPad or Laptop close to you as you do the side-questy parts of the game.

Once you’re willing to live with this issue, the absolutely amazing gameplay comes into effect again: because exploration is so much fun, and because the battle system is so much fun, suddenly the side-quests become fun too, once you remove the annoying hide-and-seek aspect.

After all, it’s the perfect excuse to do more of what you enjoy the most: Playing the game.

This is why I strongly believe that this game would have been so much better with a more modern quest-log system: don’t hide (most of) the quests! Be precise in explaining where to find stuff! You don’t have to artificially prolong the game: even when you know where to go (I did, thanks to the Wiki), there are still more than 100 hours of entertainment to be had.

The last thing about quests: some of them require you to find rare items, which you have a random chance of getting by collecting «item orbs» spread all over the map. This is of course another nice way to encourage exploration.

But I see no reason why the drop rate must be random, especially as respawning the item orbs requires you to either wait 10 to 30 minutes or save and reload the game.

If you want to encourage exploration, hide the orbs! There’s so much content in this game that artificially prolonging it with annoying saving-and-reloading escapades is completely unnecessary.

At least the amount of grinding required isn’t that bad, to the point of being absolutely bearable for me, and I have nearly zero patience for grinding.

Don’t get me wrong though: Yes, these artificial time-sinks were annoying (and frankly 100% unneeded), but because the actual gameplay is so much fun, I didn’t really mind them that much.

Finally, there are some technical issues, which I don’t really mind that much however: faces of characters look flat and blurry, which is very noticeable in the cutscenes, all of which are rendered by the engine itself (which is a very good thing).

Especially on the 3DS, the low resolution of the game is felt badly (the 3DS is much worse than the Wii, to the point of objects sometimes being invisible), and there are some objects popping into view at times. This is mostly a limitation of the hardware, which just doesn’t play well with the huge open world, so I can totally live with it. It only minimally affects my immersion in the game.

If you ask me what the preferred platform to play this on is, I would point at the Wii version, though, of course, it’ll be very hard to get the game at this point in time (no, you can’t have my copy).

the good

So after all of this, here’s a list of the unique features this game has over all other members of its genre:

  • Huge world that can be explored completely. No narrow hallways but just huge open maps.
  • An absolutely amazing battle system that goes far beyond the usual «select some action from this text-based menu»
  • Skippable cutscenes, which together with the battle system make for high replayability
  • Many different playable characters with different play styles
  • Great music by the god-like Mr. Mitsuda
  • A very, very interesting story once you reach the mid-point of the game
  • Very believable characters and very good character development
  • Some of the best cutscene direction I have ever seen in my life – again, mostly after the half-time mark (you people who played the game know which particular one I’m talking about – still sends shivers down my spine).

My wishes for the future

The game is nearly perfect in my opinion, but there are two things I think would be great to fix in the successor or in any other game taking its inspiration from Xenoblade:

First, please fix the quest log and bring it into the current decade of what we’re used to from MMORPGs (where you lifted the quest design from to begin with): show us where to get the quests, show us where to do them.

Second, and this one is even bigger in my opinion: please be more considerate in how you represent women in the game. Yes, the most bad-ass characters in the game are women (again, I can’t spoil anything here). Yes, there’s a lot of depth to the women characters in this game, and they are certainly not just there for show but are actually instrumental to the overall story development (again, in the second part).

But why does most of the equipment for the healer in the game have to be practically underwear? Do you really need to spend CPU resources on (overblown) breast physics when you render everybody’s faces blurry and flat?

Wouldn’t it be much better for the story and the immersion if the faces looked better at the cost of some (overblown) jiggling?

Do you really have to constantly show close-ups of way too big breasts of one party member? This is frankly distracting from what is going on in the game.

I don’t care about cultural differences: You managed to design very believable and bad-ass women into your game. Why do you have to diminish this by turning them into a piece of furniture to look at? They absolutely stand on their own with their abilities and their character progression.

It is the year 2015. We can do better than this (though, of course, the world was different in 2010 when the game initially came out).

Conclusion

All of that aside: because of the amazing gameplay, because of the mind-blowing story, because of the mind-blowing cutscene direction and because of the huge world that’s anything but narrow passages, I love this game more than many others.

I think this is the first time in about a decade that the JRPG genre has really moved forward, and I would definitely like to see more games ripping off the good aspects of Xenoblade (well – basically everything).

As such, I’m very much looking forward to the game’s successor becoming available here in Europe (it has just come out in Japan, and my Japanese is still practically non-existent), and I know for a fact that I’m going to play it a lot, especially as I now know to be patient with the side-quests.

Geek heaven

If I had to make a list of attributes I would like the ISP of my dreams to have, I could write quite the list:

  • I would really like to have native IPv6 support. Yes, IPv4 will be sufficient for a very long time, but unless people start having access to IPv6, it’ll never see the wide deployment it needs if we want the internet to continue to grow. An internet where addresses are only available to people with a lot of money is not an internet we all want to be subjected to (see my post «asking for permission»)
  • I would want my ISP to accept or even support network neutrality. For this to be possible, the ISP of my dreams would need to be nothing but an ISP, so that their motivations (provide better service) align with mine (getting better service). ISPs who also sell content have every motivation to provide crappy internet service in order to better sell their (higher-margin) content.
  • If I have technical issues, I want to be treated as somebody who obviously has a certain level of technical knowledge. I’m by no means an expert in networking technology, but I do know about powering it off and on again. If I have to say «shibboleet» to get to a real technician, so be it, but if that’s not needed, even better.
  • The networking technology involved in getting me the connectivity I want should be widely available and thus easily replaceable if something breaks.
  • The networking technology involved should be as simple as possible: the more complex the hardware involved, the more stuff can break, especially when you combine cost pressure for end-users with the need for high complexity.
  • The network equipment I’m installing at my home, which thus has access to my LAN, needs to be equipment I own and fully control. I do not accept leased equipment to which I do not have full access.
  • And last but not least, I would really like to have as much bandwidth as possible

I’m sure I’m not alone with these wishes, even though, for «normal people» they might seem strange.

But honestly: They just don’t know it, but they too have the same interests. Nobody wants an internet that works like TV where you pay for access to a curated small list of “approved” sites (see network neutrality and IPv6 support).

Nobody wants to get up and reboot their modem every now and then because it crashed. Nobody wants to be charged with downloading illegal content because their Wifi equipment was suddenly repurposed as an open access point for other customers of the ISP.

Most of the wishes I listed above are the basis needed for these horror scenarios to never come to pass, however unlikely they might seem now (though getting up and rebooting the modem/router is something we already have to deal with today).

So yes. While it’s getting rarer and rarer to get all the points of my list fulfilled, to the point where I thought it impossible to get all of them, I’m happy to say that here in Switzerland, there is at least one ISP that does all of this and more.

I’m talking about Init7 and especially their awesome FTTH offering Fiber7 which very recently became available in my area.

Let’s deal with the technology aspect first, as this really isn’t the important point of this post: what you get from them is pure 1Gbit/s Ethernet. Yes, they do sell you a router box if you want one, but you can just as well get a simple media converter, or just an SFP module to plug into any (managed) switch (with an SFP port).

If you have your own routing equipment, be it a Linux router like my shion or any Wifi router, there’s no need to add any kind of additional complexity to your setup.

No additional component that can crash, no software running in your home that you don’t have the password to, and certainly no sneakily opened public WLANs (I’m looking at you, cablecom).

Of course you also get native IPv6 (a /48, which incidentally is room for 281474976710656 whole internets in your apartment).
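If you want to verify that number: a /48 leaves 80 host bits, and the whole IPv4 internet is 32 bits worth of addresses.

```python
# whole IPv4 internets that fit into a single /48 allocation
print(2 ** (128 - 48) // 2 ** 32)  # 281474976710656
```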

But what’s really remarkable about Init7 isn’t the technical aspect (though, again, it’s bloody amazing), but everything else:

  • Init7 was one of the first ISPs in Switzerland to offer IPv6 to end users.
  • Init7 doesn’t just support network neutrality: they actively fight for it.
  • They explicitly state that they are not selling content and they don’t intend to start doing so. They are just an ISP and as such their motivations totally align with mine.

There are a lot of geeky soft factors too:

  • Their press releases are written in Open Office (check the PDF properties of this one, for example)
  • I got an email from a technical person on their end that was written using f’ing Claws Mail on Linux
  • Judging from the Received headers of their email, they are using IPv6 in their internal LAN – down to the desktop workstations. And related to that:
  • The machines in their LAN respond to ICMPv6 pings, which is utterly crazy cool. Yes, they are firewalled (cough – I had to try. Sorry.), but they let ICMP through. For the less technical readers here: this is as good an internet citizen as you will ever see, and it’s extremely unexpected these days.

If you are a geek like me and your ideals align with the ones I listed above, there is no question: you have to support them. If you can get their fiber offering in your area, this is a no-brainer. You can’t get synchronous 1Gbit/s for CHF 64ish per month anywhere else, and even if you could, it wouldn’t be plain Ethernet either.

If you can’t get their fiber offering, it’s still worth considering their other offers. They do have some DSL-based plans, which of course are technically inferior to plain Ethernet over fiber, but you would still be supporting one of the few remaining pure ISPs.

It doesn’t have to be Init7 either. For all I know, there are many others, maybe even here in Switzerland. Init7 is what I decided to go with, initially because of the Gbit, but the more I learned about their philosophy, the less important the bandwidth got.

We need to support companies like these because companies like these are what ensures that the internet of the future will be as awesome as the internet is today.

why I don’t touch crypto

When doing our work as programmers, we screw up. Small bugs, big bugs, laziness – the possibilities are endless.

Usually, when we screw up, we know that immediately: We get a failing test, we get an exception logged somewhere, or we hear from our users that such and such feature doesn’t work.

Also, most of the time, no matter how bad the bug, the issue can be worked around and the application keeps working overall.

Once you’ve found the bug, you fix it and everybody is happy.

But imagine you had one of those off-by-one errors in your code (the kind that constantly happens to all of us), and further imagine that the function containing the error still apparently produced the same output as if the error weren’t there.

Imagine that, because of that error, the apparently correct-looking output is completely useless and your whole application has just utterly broken.

That’s crypto for you.

Crypto can’t be a «bit broken». It can’t be «mostly working». Either it’s 100% correct, or you shouldn’t have bothered doing it at all. The weakest link breaks the whole chain.

Worse: looking at the data you are working with doesn’t show any sign of wrongness. You encrypt something, you see random data. You decrypt it, you see clear text. Seems to work fine. Right! Right?

Last week’s issue in the random number generator in Cryptocat is a very good example.

The bug was an off-by-one error in their random number generator. The output of the function was still random numbers; looking at the output would clearly show random numbers. Given that, the natural bias for seeing code as correct is only reinforced.

But yet it was wrong. The bug was there and the random numbers weren’t really random (enough).
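To be clear, what follows is not Cryptocat’s actual code, just an illustration of this class of bug: a function that is supposed to return a uniform value in [0, n) but, thanks to an off-by-one, can never produce one of the values. Its output still looks perfectly random:

```python
import os

def random_below(n):
    """Intended: a uniform random integer in [0, n)."""
    r = int.from_bytes(os.urandom(8), "big")
    return r % (n - 1)  # BUG: should be r % n; the value n-1 never occurs

# Eyeballing the output reveals nothing: the missing value and the skewed
# distribution don't jump out of a list of random-looking numbers.
print([random_below(10) for _ in range(20)])
```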

The weakest link was broken and the whole security effort rendered practically pointless, which is even worse in this case of an application whose only purpose is, you know, security.

Security wasn’t just an added feature to some other core functionality. It was the core functionality.

That small off-by-one error completely broke the whole application and was completely unnoticeable by just looking at the produced output. Writing a test case for this would have required complicated thinking and coding, which would have been as likely to contain an error as the code to be tested.

This, my friends, is why I keep my hands off crypto. I’m just plain not good enough. Crypto is a world where understanding the concepts, understanding the math and writing tests just isn’t good enough.

The goal you have to reach is perfection. If you fail to reach that, then you have failed utterly.

Crypto is something I leave to others to deal with. Either they have reached perfection, at which point they have my utmost respect; or they fail, at which point they have my understanding.