Why I recommend against JWT

JSON Web Tokens are all the rage lately. They are lauded as a stateless alternative to server-side sessions and as the perfect way to handle authentication in your single-page app, and some people even sell them as a workaround for the EU cookie policy because, you know, they work without cookies too.

If you ask me, though, I would always recommend against using JWT to solve your problem.

Let me take the common arguments in favour of JWT and debunk them, from weakest to strongest:

Debunking arguments

It requires no cookies

General “best” practice stores the JWT in the browser’s local storage and then sends it to the server with every authenticated API call.

This is no different from a traditional cookie, except that transmission to the server isn’t done automatically by the browser (as it would be for a cookie) and that it is significantly less secure: as there is no way to set a value in local storage outside of JavaScript, there consequently is no feature equivalent to the HttpOnly flag of cookies. This means that any XSS vulnerability in your frontend now gives an attacker access to the token.
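
For comparison, here is a minimal sketch (using Flask purely as an illustration; the route and names are not from any real application) of handing out an opaque session ID in an HttpOnly cookie, which injected script code cannot read:

    # Minimal sketch; assumes Flask is installed. Everything here is illustrative.
    import secrets

    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/login", methods=["POST"])
    def login():
        # ... credential check would go here ...
        session_id = secrets.token_urlsafe(32)  # opaque random identifier
        response = make_response("logged in")
        # httponly keeps the value out of reach of JavaScript, so an XSS
        # vulnerability cannot exfiltrate it; secure restricts it to HTTPS.
        response.set_cookie("session", session_id, httponly=True, secure=True)
        return response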

Worse, as people often use JWT for both the short-lived access token and the refresh token, any XSS vulnerability now gives the attacker access to a valid refresh token that can be used to create new access tokens at will, even after the session has expired, completely negating the benefits of having separate refresh and access tokens.

“But at least I don’t need to display one of those EU cookie warnings”, I hear you say. But did you know that the warning is only required for tracking cookies? Cookies that are required for the operation of your site (such as a traditional session cookie) don’t require you to put up that warning in the first place.

It’s stateless

This is another often-used argument in favour of JWT: because the server can put all the required state into the token, there’s no need to store anything on the server side, so you can load-balance incoming requests to any app server you want and you don’t need a central store for session state.

In general, that’s true, but it becomes an issue once you need to revoke or refresh tokens.

JWT is often used in conjunction with OAuth where the server issues a relatively short-lived access token and a longer-lived refresh token.

When a client wants to refresh its access token, it uses its refresh token to do so. The server validates that and then hands out a new access token.

But for security reasons, you don’t want that refresh token to be reusable (otherwise a leaked refresh token could be used to gain access to the site for its whole validity period), and you probably also want to invalidate the previously used access token; otherwise, if that has leaked, it could be used until its expiration date even though the legitimate client has already refreshed it.

So you need a means to blacklist tokens.

Which means you’re back to keeping track of state, because that’s the only way to do this: either you blacklist the whole encoded representation of the token, or you put some unique ID into the token and then blacklist that (comparing after decoding the token). Whatever you do, you still need to keep track of that shared state.
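
Here is a minimal sketch of the second variant, using the PyJWT library and a process-local set standing in for what would have to be a store shared between all app servers (the key and names are illustrative):

    # Sketch only: a real deployment would need a shared store (e.g. a
    # database or Redis) instead of this process-local set.
    import jwt  # PyJWT

    SECRET = "change-me"           # illustrative signing key
    revoked_ids: set[str] = set()  # the shared state you were trying to avoid

    def revoke(token: str) -> None:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        revoked_ids.add(claims["jti"])

    def validate(token: str) -> dict:
        # decode() checks signature and expiry, but only shared state can
        # tell us whether the token was revoked before it expired.
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
        if claims["jti"] in revoked_ids:
            raise PermissionError("token has been revoked")
        return claims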

And once you’re doing that, you lose all the perceived advantages of statelessness.

Worse: because the server has to invalidate and blacklist both the access and the refresh token when a refresh happens, a connection failure during a refresh can leave a client without a valid token, forcing users to log in again.

In today’s world of mostly mobile clients on mobile phone networks, this happens more often than you’d think. Especially as your access tokens should be relatively short-lived.

It’s better than rolling your own crypto

In general, yes, I agree with that argument. Anything is better than rolling your own crypto. But are you sure your library of choice has implemented the signature check and the decryption correctly? Are you keeping up to date with security flaws in your library of choice (or its dependencies)?

You know what is still better than using existing crypto? Using no crypto whatsoever. If all you hand out to the client is a completely random token and all you do is look up the data assigned to that token, then there’s no crypto anybody could get wrong.
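
That approach fits in a few lines. A sketch (the in-memory store is illustrative; in production it would be a database table, and the lookup a single primary-key query):

    import secrets

    # Illustrative in-memory store; in production this would be a database
    # table or a key-value store shared by the app servers.
    sessions: dict[str, dict] = {}

    def create_session(user_id: int) -> str:
        # 32 bytes of randomness: infeasible to guess, nothing to decrypt,
        # no signature to verify, no algorithm parameter to get wrong.
        token = secrets.token_urlsafe(32)
        sessions[token] = {"user_id": user_id}
        return token

    def resolve(token: str) -> dict | None:
        # A single lookup replaces all of the JWT machinery.
        return sessions.get(token)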

A solution in search of a problem

So once all the good arguments in favour of JWT have dissolved, you’re left with all their disadvantages:

  • By default, the JWT spec allows for insecure algorithms and key sizes. It’s up to you to choose safe parameters for your application (see the sketch after this list).
  • Doing JWT means you’re doing crypto and you’re decrypting potentially hostile data. Are you up to this additional complexity compared to a single primary-key lookup?
  • JWTs contain quite a bit of metadata and other bookkeeping information. Transmitting this with every request is more expensive than transmitting a single ID.
  • It’s brittle: your application has to make sure never to make a request to the server without the token present. Every AJAX request your frontend makes needs to manually append the token, and as the server has to blacklist both access and refresh tokens whenever they are used, you might accidentally end up without a valid token when the connection fails during a refresh.
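
Regarding the first point: the usual mitigation is to pin the accepted algorithms on the verifying side instead of trusting the token’s own header. A sketch using the PyJWT library (the key is illustrative):

    import jwt  # PyJWT

    SECRET = "change-me"  # illustrative key; use a long random value

    def verify(token: str) -> dict:
        # Never let the token's own header choose the algorithm: explicitly
        # pinning algorithms=["HS256"] rejects tokens signed with "none" or
        # with an unexpectedly weak algorithm.
        return jwt.decode(token, SECRET, algorithms=["HS256"])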

So are they really useless?

Even despite all these negative arguments, I think that JWT is great for one specific purpose, and that’s authentication between different services in the backend, if those services can’t trust each other.

In such a case, you can use very short-lived tokens (with a lifetime measured in seconds at most) and you never have them leave your internal network. All the clients ever see is a traditional session-cookie (in case of a browser-based frontend) or a traditional OAuth access token.

This session cookie or access token is checked by the frontend servers (which, yes, have to have access to some shared state, but this isn’t an unsolvable issue), which then issue the required short-lived JWTs to talk to the various backend services.
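
Issuing such a token is straightforward. A sketch with PyJWT (the shared key, the audience and the lifetime are illustrative):

    import time

    import jwt  # PyJWT

    SERVICE_KEY = "secret-shared-with-the-backend"  # illustrative

    def issue_service_token(user_id: int) -> str:
        now = int(time.time())
        claims = {
            "sub": str(user_id),
            "aud": "billing-service",  # hypothetical backend service
            "iat": now,
            "exp": now + 30,  # seconds-long lifetime: no blacklist needed
        }
        return jwt.encode(claims, SERVICE_KEY, algorithm="HS256")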

Or you use them when you have two loosely coupled backend services that trust each other and need to talk to each other. There too, you can issue short-lived tokens (provided you are aware of the security issues described above).

In the case of short-lived tokens that never go to the user, you circumvent most of the issues outlined above: they can be truly stateless because, thanks to their short lifetime, you don’t ever need to blacklist them, and they can be stored in a location that’s not exposed to possible XSS attacks against your frontend.

This just leaves the issue of the difficult-to-get-right crypto, but as you never accept tokens from untrusted sources, a whole class of possible attacks becomes impossible, so you might even get away with not updating all that regularly.

So, please, when you are writing your next web API that uses any kind of authentication and you ask yourself “should I use JWT for this?”, resist the temptation. Using plain opaque tokens is always better when you talk to an untrusted frontend.

Only when you are working on scaling out your application and splitting it into multiple disconnected microservices and you need a way to pass credentials between them, then by all means go ahead and investigate JWT – it’ll surely be better than cobbling something together yourself.

Technology driven life changes

Last year, when I talked about finally seeing the Apple Watch becoming mildly useful, I had no idea what kind of a ride I was going to be on.

Generally, I’m not really concerned about my health or fitness, but last September, when my wonderful girlfriend left for a year of study in England, I decided that I had finally had enough and I wanted to lose weight.

Having a year of near-zero social obligations would totally allow me to adjust my lifestyle in a way that’s conducive to weight loss, so here’s what I started doing:

  • During weekdays, I greatly reduced my calorie intake to basically just a salad and a piece of bread every day (you can pry my bread from my cold dead hands – it’s the one food I think I like the most).
  • Every day, no matter the weather, no matter the workload, no matter what, I was going to walk home after my workday in the office; on weekends, I would just take an equivalent walk.
  • Every day, I wanted to fill the “Activity” and the “Exercise” rings on my Apple Watch.

Now, walking home sounds like nothing special, but I’m privileged to live in Zürich, Switzerland, which means that I have very easy access to forests to walk in.

So commuting home by foot meant that I could walk at least 8 kilometres (4.9 miles), climbing 330m (1082 feet), most of it through the forest.

Every day, no matter whether it was way too hot, way too cold, whether it was raining, hailing or snowing, I would walk home. And every day I would be using my Apple Watch to track what I would generously call a “Workout” (even though it was just walking – but if you go from zero sports to that, I guess it’s ok to call it that).

From September to December, I gradually increased the distance I walked.

This is the other great thing about Zürich: Once you reach the forest (which you do by walking 20 minutes in practically any direction), you can stay in the forest for hours and hours.

First I extended the 8km walk to 10km, then 12, then 14 and finally 19 (11 miles).

During that time, I kept tracking all the vital signs I could track between the Apple Watch and a Withings scale I bought 1-2 months into this.

My walks got faster and my resting heart rate got lower and lower, from 80 to now 60.

Every evening after the walk, I would look at the achievements handed out by my Watch. This is also why I’ve never updated my movement goals in the Activity app: getting all these badges, honestly, was a lot of fun and very motivational. Every evening I would get notified of extending my movement streak, of doubling or even tripling my movement goal and of tripling or quadrupling my exercise goal.

screenshot of the activity app

Every morning I would weigh myself and bask in the glory of the ever-falling graph painted by the (back then very good) iOS app that came with the scale. I managed to lose a very consistent 2kg (4.4lbs) per week.

On every walk, I would have the chance to experience some of nature’s beauty.

Crazy sunsets

sunset

Beautiful sunrises

sunrise

Enchanted forests

wintery forest

And frozen creeks

frozen creek

And in spring I could watch trees grow.

When I got home after up to three hours of walking, I was dead tired at around 10pm, meaning that for the first time in ages I would get more than enough sleep and still be able to get up between 6 and 7.

By mid-March, after 6 months of a very strict diet and walking home every day, I was done. I had lost 40kg (88.1 lbs).

Now the challenge shifted from losing weight to not gaining weight. I decided to make the diet less strict but also continue with my walks, though I would not do the regular 19km ones any more as they would just take too long (3 hours).

But by June, I really started to notice a change: I wouldn’t feel these walks at all any more. No sweat, no noticeable change in heart rate while on them, no tiredness. The walks really felt like a waste of time.

So I started running.

I never liked running. I was always bad at it, all the way through school, where I was the slowest and always felt really bad afterwards, and through my life until now, where I just never did it. Running felt bad and I hated it.

But now things were different.

The first time I changed from walking to running, I did so after reaching the peak altitude, so it was mostly straight and a little bit downhill. But still: I ran 4km (2.48 miles) and when I got home I didn’t feel much more tired.

I was very surprised, because through all of my life, running 4km would have been completely unthinkable to me, but there I was. I had just done it.

So the next day, I decided to run most of the way, just skipping the steepest parts. Suddenly, there I was, running 8km (4.9 miles) and still not feeling particularly tired afterwards.

So I started tracking these runs (using both Runkeeper and Strava for technical reasons – but that’s another post), seeing improvement in my time all the way through July.

And then, on August 1st, I ran a half marathon, climbing 612m (2007 feet).

screenshot of the tracked half-marathon run

Considering that this was my first, it’s not even too bad a time, and what’s even more fun to me: I didn’t even feel too tired afterwards and I totally felt like I could run even farther.

So I guess that after taking it very slowly and moving from walking a bit, to walking more, to walking a lot, to running a bit, to running some more, even I, the most unathletic person possible, can push myself into shape.

But the most interesting aspect of all of this is that without technology, the Apple Watch in particular, and without the cheesy achievements, none of this would ever have been possible. I hated sports and I’m honestly still not really interested. But the prospect of being awarded some stupid badges every day is what finally pushed me.

And now, in only a single month, my girlfriend will finally return to Switzerland, and I guess she’ll find me in better shape than she’s ever seen me in before. I hope that the prospect of collecting some more badges from my watch will keep me going even when the social pressure might tempt me into skipping a workout.

Apple Watch starting to be useful

Even after the Time for Coffee app was updated with watchOS 2.0 support last year and my Apple Watch became significantly more useful, the fact that the complication rarely got a chance to update and the fact that launching the app took an eternity kind of detracted from the experience.

Which led to me not really using the watch most of the time. I’m not a watch person. Never was. And while the temptation of playing with a new gadget led to me wearing it on and off, I was still waiting for the killer feature to come around.

This summer, this has changed a lot.

I’m in the developer program, so I’m running this summer’s beta versions and Apple has also launched Apple Pay here in Switzerland.

So suddenly, by wearing the watch, I get access to a lot of very nice features that present themselves as huge user experience improvements:

  • While «Time for Coffee»’s complication is currently flaky at best, I can easily attribute this to watchOS’s current beta state. But that doesn’t matter anyway, because the Watch now keeps apps running, so whenever I need public transport departure information and the complication is flaky, I can just launch the app, which now comes up instantly and loads the information in less than a second.
  • Speaking of leaving apps running: the watch can now be configured to revert to the clock face only after more than 8 minutes have passed since the last use. This is perfect for the Bring shopping list app, which is suddenly useful. No more taking the phone out while shopping.
  • Auto-unlocking the Mac through the presence of an unlocked and worn watch has gone from not working at all, to working rarely, to working most of the time as the beta releases have progressed (and since beta 4 we also got the explanation that WiFi needs to be enabled on the to-be-unlocked Mac, so now it works on all machines). This is very convenient.
  • While most of the banks here in Switzerland boycott Apple Pay (a topic for another blog entry – both the banks and Apple are in the wrong), I did get a Cornèrcard which does work with Apple Pay. Being able to pay contactless with the watch, even for amounts larger than CHF 50 (which is the limit for passive cards), is amazing.

Between all these features, I think there’s finally enough justification for me to actually wear the watch. It still happens that I forget to put it on every now and then, but overall, this has totally put new life into this gadget, to the point where I’m inclined to say that it’s a totally new and actually very good experience now.

If you were on the fence before, give it a try come next autumn. It’s really great now.

AV Programs as a Security Risk

Imagine you were logged into your machine as an administrator. Imagine you’re going to double-click every single attachment in every single email you get. Imagine you’re going to launch every single file your browser downloads. Imagine you’re going to answer in the affirmative every single prompt to install the latest whatever. Imagine you unpack every single archive sent to you and launch every single file in those archives.

This is the position that AV programs put themselves in on your machine if they want to have any chance of actually detecting malware. Just checking whether a file contains a known byte signature stopped being a reliable method for detecting viruses long ago.

It makes sense: if I’m going to redistribute some well-known piece of malware, all I have to do is obfuscate it a little bit or encrypt it with a static key, and my piece of malware will naturally no longer match any signature of any existing malware.

The loader stub might, but if I’m using any of the existing installer packagers, then I don’t look any different from any other setup utility for any other piece of software. No AV vendor can yet afford to blacklist all installers.

So the only reliable way to know whether a piece of software is malware or not is to start running it, in order to at least get it to extract or decrypt itself.

So here we are in a position where an anti-malware program is either a useless placebo or has to put itself into the position I started this article with.

Personally, I think it is impossible to safely run a piece of software in a way that it cannot do any harm to the host machine.

AV vendors could certainly try to make it as hard as possible for malware to take over a host machine, but here we are in 2016, where most of the existing AV programs are based on projects started in the 90s, when software quality and correctness were even less of a focus than they are today.

We see AV programs disabling OS security features, installing and starting VNC servers and providing any malicious web site with full shell access to the local machine. Or allowing malware to completely take over a machine if a few bytes are read, no matter where from.

And this doesn’t even cover the privacy issues caused by the ever-increasing price pressure the various AV vendors are subject to. If you have to sell the software too cheaply to pay for its development (or even give it away for free), then you need to open up other revenue streams.

Being placed in such a privileged position as AV tools are, it’s no wonder what kinds of revenue streams are now in the process of being tapped…

AV programs by definition put themselves into an extremely dangerous spot on your machine: in order to read every file your OS wants to access, they have to run with administrative rights, and in order to actually protect you, they have to understand many, many more file formats than you have applications for on your machine.

AV software has to support every existing archive format, even long-obsolete ones, because who knows – you might have some application somewhere that can unpack them; it has to support every container format in existence and it has to support all kinds of malformed files.

If you try to open a malformed file with some application, then the application has the freedom to crash. An AV program must keep going and try even harder to see into the file to make sure it’s just corrupt and not somehow malicious.

And as stated above: once it finally gets to some executable payload, it often has no choice but to actually execute it, at least partially.

This must be one of the most difficult things to get right in all of engineering: being placed in a highly privileged spot and being tasked with handling content that is malicious per definitionem is an incredibly difficult job, and when combined with obviously bad security practices (see above), it leads me to the conclusion that installing AV programs actually lowers the overall security of your machines.

Given a fully patched OS, installing an AV tool will greatly widen the attack surface as now you’re putting a piece of software on your machine that will try and make sense of every single byte going in and out of your machine, something your normal OS will not do.

AV tools have the choice of doing nothing against any but the most common threats if they decide to do pure signature matching, or of potentially putting your machine at risk.

AV these days might provide a very small bit of additional security against well-known threats (though against those you’re also protected if you apply the latest OS patches and don’t work as an admin), but it opens your installation wide to all kinds of targeted attacks or really nasty 0-day exploits that can bring down your network without any user interaction whatsoever.

If asked what to do these days, I would strongly recommend not installing AV tools. Keep all the software you’re running up to date and whitelist the applications you want your users to run. Make use of whitelisting by code signature to, say, allow everything by a specific vendor. Or all OS components.

If your users are more tech-savvy (like developers or sysadmins), don’t whitelist, but also don’t install AV on their machines. They are normally good enough not to accidentally run malware, and the risk of them screwing up is much lower than the risk of somebody exploiting the latest flaw in your runs-as-admin-and-launches-every-binary piece of security software.

The new AppleTV

When the 2nd-generation AppleTV came out and offered AirPlay support, I bought one more or less for its curiosity value, but it worked so well in conjunction with AirVideo that it completely replaced my previous attempts at an in-home media center system.

It was silent, never really required OS or application updates, never crashed and never overheated. And thanks to AirVideo, it was able to play everything I could throw at it (at the cost of a server running in the closet, of course).

The only inconvenience was the fact that I needed too many devices. Playing a video involved my TV, the AppleTV and my iOS device, plus remotes for the TV and the AppleTV. Personally, I didn’t really mind much, but while I would have loved to give my parents access to my media library (1 Gbit/s upstream FTW), the requirement to use three devices and to correctly switch on AirPlay made this a complete impossibility due to the complexity.

So I patiently awaited the day when the AppleTV would finally be able to run apps itself. There was no technical reason to prevent that – the AppleTV totally was powerful enough for this and it was already running iOS.

You can imagine how happy I was when I finally got what I wanted and the new 4th-generation AppleTV was announced. Finally a solution my parents could use. Finally something to help me ditch the majority of the devices involved.

So of course I bought the new device the moment it became available.

I even had to go through additional trouble due to the lack of the optical digital port (the old AppleTV was connected to a Sonos playbar), but I found an audio extractor that works well enough.

So now, after a few weeks of use, the one thing that actually pushed me to write this post is the fact that the new AppleTV is probably the most unfinished and unpolished product I have ever bought from Apple. Does it work? Yes. But the list of small oversights and missing pieces is bigger than in any Apple product I have ever seen. Ever.

Let me give you a list – quite like what I did 12 years ago for a very different device:

  • While the AppleTV provides you with the option of touching it with an iOS device to configure the WiFi and Apple ID settings, I still had to type in my Apple ID password twice: once for the App Store and once for Game Center. Mind you, my Apple ID password is 30 characters long, containing uppercase and lowercase letters, digits and symbols. Have fun doing that on the on-screen keyboard.
  • The UI is laggy. The reason I had to type in the Game Center password was that the UI was still loading the system Apple ID as I was pressing the “Press here to log in” button. First nothing happened, then the button turned into a “Press here to sign out” button, and then the device reacted to my button press. Thank you.
  • The old AppleTV supported either the Remote app on an iPhone or even a Bluetooth keyboard for character entry. The new one supports neither, so there’s really no way around the crappy on-screen keyboard.
  • While the device allows you to turn off automatic app updates, there is no list of apps with pending updates. There’s only “Recently updated”, but that is a) limited to 20 apps, b) a list of all recently updated apps, c) gives no indication of which app has been updated and which hasn’t, and finally d) isn’t even sorted by date of the last update. This UI is barely acceptable with automatic updates enabled, but completely unusable if you want them disabled, to the point that I decided to just bite the bullet and enable them.
  • The sound settings offer “Automatic”, “Stereo” and “Dolby Surround”. Now, “Dolby Surround” is a technology from the mid-90s that encodes one additional back channel into a stereo signal and is definitely not what you want (which would be “Dolby Digital”). Of course I assumed there was some “helpfulness” at work here, detecting the fact that my TV doesn’t support Dolby Digital (but the playbar does, so it’s perfectly fine to send out an AC-3 signal). Only after quite a bit of debugging did I find out that what Apple calls “Dolby Surround” is actually “Dolby Digital”. WHY??
  • The remote is way too sensitive. If you so much as lift it up, you’ll start seeking in your video (which works way better than anything I’ve seen before, but still…).
  • Until the first update (provided without a changelog or anything of the like), the YouTube app would constantly interrupt playback and reload the stream once you had paused a video.
  • Of course, Siri doesn’t work in Switzerland, even though I would totally be able to use it in English (or German – it’s available in Germany, after all). Not that it matters, because the Swiss store is devoid of media I’d actually be interested in anyway, and there’s no way for third parties to integrate into the consolidated system-wide interface for media browsing.
  • Home Sharing doesn’t work for me. At. All. Even after typing in my Apple ID password a third time (which, yes, it asked me to).
  • It still doesn’t wake up on network access, nor does it appear in my phone’s list of AirPlay devices while in sleep mode. This only happens in one segment of my network, so it might be an issue with a switch though – wouldn’t be the first time :/

I’m sure as time goes on we’ll see updates to fix this mess, but I cannot for the life of me understand why Apple thought that the time was ready to release this.

Again: it works fine, and I will be bringing one to my mother next Friday because I know she’ll be able to use it just fine (especially using the Plex app). But this kind of lack of polish is what we’re used to on Android and Windows. How can Apple produce something like this?

IPv6 in production

Yesterday, I talked about why we need IPv6. To make that actually happen, I decided to do my part and make sure that all of our infrastructure is available over IPv6.

Here’s a story of how that went:

The first step was to request an IPv6 allocation from our hosting provider. Thankfully, our contract with them included a /64, but it was never enabled, and when I asked for it, they initially tried to bill us an extra CHF 12 per month. After I pointed them to the contract, though, they started to make IPv6 happen.

That this still took them multiple days was a pointer that they were not ready at all, and by asking, I was forcing them into readiness. I think I have done a good deed there.

dns

Before doing anything else, I made sure that our DNS servers are accessible over IPv6 and that IPv6 glue records existed for them.

We’re using PowerDNS, so actually supporting IPv6 connectivity was trivial, though a bit of tweaking was needed to tell it which local address to use for outgoing zone transfers.

Creating the glue records for the DNS servers was trivial too – nic.ch has a nice UI for handling glue records. I already had IPv4 glue records, so all I had to do was add the IPv6 addresses.

web properties

Making our web properties available over IPv6 was trivial. All I had to do was to assign an IPv6 address to our frontend load balancer.

I did not change any of the backend network though. That’s still running IPv4, and it probably will for a long time to come, as I have already carefully allocated addresses, configured DHCP, and I even know the IP addresses by heart. No need to change this.

I had to update the web application itself a tiny bit in order to cope with a REMOTE_ADDR that didn’t quite look the same any more though:

  • There were places where we put the remote address into the database. Thankfully, we are using PostgreSQL, whose native inet type (it even supports handy type-specific operators) has supported IPv6 practically forever. If you’re using another database and you’re storing the address in a VARCHAR, be prepared to lengthen the columns, as IPv6 addresses are much longer.
  • There were some places where we were using CIDR matching for some privileged API calls we allow from the internal network. Of course, because I haven’t changed the internal network, no code change was strictly needed, but I have updated the code (and unit tests) to deal with IPv6 too (a sketch follows below).
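
For illustration, this is roughly what such a check can look like in Python using the standard-library ipaddress module (the networks listed are placeholders, not our actual internal ranges):

    import ipaddress

    # Placeholder networks; the real internal ranges are different.
    PRIVILEGED_NETWORKS = [
        ipaddress.ip_network("10.0.0.0/8"),     # internal IPv4 range
        ipaddress.ip_network("2001:db8::/32"),  # internal IPv6 range (doc prefix)
    ]

    def is_privileged(remote_addr: str) -> bool:
        # ip_address() parses both IPv4 and IPv6 literals; the "in" check
        # simply yields False when the versions don't match, so one test
        # covers whatever REMOTE_ADDR arrives.
        addr = ipaddress.ip_address(remote_addr)
        return any(addr in net for net in PRIVILEGED_NETWORKS)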

The last step was to add the AAAA record for our load balancer.

From that moment on, our web properties were available via IPv6 and while there’s not a lot of traffic from Switzerland, over in Germany, about 30% of all requests are happening over IPv6.

email

Of the bunch, dealing with email was the most complicated step. Not so much enabling IPv6 support in the MTA, as that has been supported since forever (we’re using Exim (warning: very old post)).

The difficulty lay in getting everything else to work smoothly though – mostly in regards to spam filtering:

  • Many RBLs don’t support IPv6, so I had to make sure we weren’t accidentally treating all mail delivered to us over IPv6 as spam.
  • If you want to have any chance of your mail being accepted by remote parties, you must have a valid PTR record for your mail server. This meant getting reverse DNS to work right for IPv6 (see the sketch after this list).
  • Of course, you also need to update the SPF record now that you are sending email over IPv6.
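
What remote servers typically check is forward-confirmed reverse DNS: the PTR record of the connecting address must resolve to a hostname whose AAAA record points back at that same address. A quick sanity check for this, sketched with Python’s standard library (the address is a placeholder, and a robust version would normalise both addresses with the ipaddress module before comparing):

    import socket

    def forward_confirmed_rdns(address: str) -> bool:
        # 1. Reverse lookup: find the PTR hostname for the address.
        hostname, _aliases, _addresses = socket.gethostbyaddr(address)
        # 2. Forward lookup: resolve that hostname and check whether the
        #    original address is among the results.
        results = socket.getaddrinfo(hostname, None)
        return any(info[4][0] == address for info in results)

    # Placeholder usage:
    # print(forward_confirmed_rdns("2001:db8::25"))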

PTR record

The PTR record was actually the crux of the matter.

In IPv4, it’s impractical or even impossible to get a reverse delegation for anything smaller than a /24 because of the way reverse lookup works in DNS. There was RFC 2317, but that was just too much hassle for most ISPs to implement.

So the process normally was to let the ISP handle the few PTR records you wanted.

This changes with IPv6 in two ways: as the allocation is mostly fixed at a /64 or larger, and because there are so many IPv6 addresses that networks can be split at byte boundaries without being stingy, it is trivially easy to do proper reverse delegation to customers.

And because there are so many addresses available for a customer (a /64 allocation is enough addresses to cover 2^32 whole internets), reverse delegation is the only way to make good use of all these addresses.
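
The delegation itself follows directly from the address: the name in the ip6.arpa zone is just the address’s nibbles in reverse order, which Python’s ipaddress module can demonstrate (the prefix below is the documentation prefix, standing in for a customer’s /64):

    import ipaddress

    # Documentation prefix as a placeholder for a customer's /64.
    addr = ipaddress.ip_address("2001:db8:1:2::1")

    # Prints the nibble-reversed name under ip6.arpa at which the
    # PTR record for this address lives.
    print(addr.reverse_pointer)

    # The ISP only has to delegate the /64's ip6.arpa zone, here
    # 2.0.0.0.1.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa, to the customer's name
    # servers; everything below it is then under the customer's control.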

This is where I hit my next roadblock with the ISP though.

They were not at all set up for proper reverse delegation – the support ticket I opened in November 2014 took over 6 months and was finally closed in May of this year.

As an aside: this was a professional colocation provider for business customers that was, in 2014, not prepared to even just hand out IPv6 addresses and that required 6 months to get reverse delegation to work.

My awesome ISP has been handing out IPv6 addresses since the late 90s, and they offer reverse delegation for free to anybody who asks. As a matter of fact, it was they who asked me whether I wanted a reverse delegation last year when I signed up with them.

Of course I said yes :-)

This brought me to the paradoxical situation of having a fully working IPv6 setup at home while I had to wait for 6 months for my commercial business ISP to get there.

it’s done now

So after spending about 2 days learning about IPv6, after spending about 2 days updating our application, after spending one day convincing our ISP to give us the IPv6 allocation they promised in the contract and after waiting 6 months for the reverse delegation, I can finally say that all our services are now accessible via IPv6.

Here are the headers of the very first email we transmitted via IPv6.

And here’s the achievement badge I waited so patiently (because of the PTR delegation) to finally earn 🎉

IPv6 Certification Badge for pilif

I can’t wait for the accompanying T-Shirt to arrive 😃

Why we need IPv6

As we are running out of IPv4 network addresses (and yes, we are), there are only two possible future scenarios, and one of the two most people are not going to like at all.

As IP addresses get more and more scarce, things will start to suck for both clients and content providers.

As more and more clients connect, carrier-grade NAT will become the norm. NAT already sucks, but at least you get to control it, and using NAT-PMP or UPnP, applications in your network get some control over accepting incoming connections.

Carrier-grade NAT is different. That’s NAT being done on the ISP’s end, so you don’t get to open ports at all. This will affect gaming performance, it will affect your ability to use VoIP clients and, of course, file sharing clients.

For content providers on the other hand, it will become more and more difficult to get the public IP addresses needed for them to be able to actually provide content.

Back in the day, if you wanted to launch a service, you would just do it. No need to ask anybody for permission. But in the future, as addresses become scarce and controlled by big ISPs which also act as content providers, the ISPs become the gatekeepers for new services.

Either you do something they like you to be doing, or you don’t get an address: as there will be way more content providers fighting over addresses than there will be addresses available, it’s easy for them to be picky.

Old companies who still have addresses of course are not affected, but competing against them will become hard or even impossible.

More power to the ISPs and no competition for existing content-providing services are both very good things for players already in the game, so that’s certainly a possible future they are looking forward to.

If we want to prevent this possible future from becoming reality, we need a way out. IPv4 is drying up. IPv6 has existed for a long time, but people are reluctant to upgrade their infrastructure.

It’s a vicious cycle: People don’t upgrade their infrastructure to IPv6 because nobody is using IPv6 and nobody is using IPv6 because there’s nothing to be gained from using IPv6.

If we want to keep the internet as an open medium, we need to break the cycle. Everybody needs to work together to provide services over IPv6, to the point of even offering services over IPv6 exclusively.

Only then can we start to build pressure for ISPs to support IPv6 on their end.

If you are a content provider, ask your ISP for IPv6 support and start offering your content over IPv6. If you are an end user, pressure your ISP to offer IPv6 connectivity.

Knowing this, one year ago, after getting motivated by my awesome ISP (which has offered IPv6 connectivity since forever), I started to get our commercial infrastructure up to speed.

Read on to learn how that went.