IPv6 in production

Yesterday, I talked about why we need IPv6, and to make that actually happen, I decided to do my part and make sure that all of our infrastructure is available over IPv6.

Here’s the story of how that went:

The first step was to request an IPv6 allocation from our hosting provider. Thankfully, our contract with them included a /64, but it was never enabled, and when I asked for it, they initially tried to bill us CHF 12/month extra. After I pointed them to the contract, they started to make IPv6 happen.

That this still took them multiple days to do was a sign to me that they were not ready at all and that, by asking, I was forcing them into readiness. I think I have done a good deed there.

dns

Before doing anything else, I made sure that our DNS servers were accessible over IPv6 and that IPv6 glue records existed for them.

We’re using PowerDNS, so actually supporting IPv6 connectivity was trivial, though a bit of tweaking was needed to tell it which interface to use for outgoing zone transfers.
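
For reference, on the PowerDNS version we were running, the relevant pdns.conf entries look roughly like this (the addresses are examples, not our real ones):

    # pdns.conf: listen on v6 and pin the source address used for
    # outgoing traffic such as zone transfer requests
    local-ipv6=2001:db8:1::53
    query-local-address6=2001:db8:1::53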

Creating the glue records for the DNS servers was trivial too – nic.ch has a nice UI to handle glue records. We already had IPv4 glue records, so all I had to do was add the v6 addresses.
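
If you want to check that the glue actually made it into the .ch zone, you can ask one of the parent zone’s servers directly (names here are examples):

    $ dig @a.nic.ch ns1.example.ch AAAA +norecurse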

web properties

Making our web properties available over IPv6 was trivial. All I had to do was to assign an IPv6 address to our frontend load balancer.
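
On a Linux-based balancer, that part is a one-liner each for the address and the route (address, gateway and interface are examples); the proxy itself then just needs a v6 listening socket, e.g. listen [::]:443 if it happens to be nginx:

    # assign an address from our allocation to the public interface
    ip -6 addr add 2001:db8:1::80/64 dev eth0
    # default route via the provider's gateway
    ip -6 route add default via 2001:db8:1::1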

I did not change any of the backend network though. That’s still running IPv4, and it probably will for a long time to come: I have already carefully allocated addresses, configured DHCP and I even know the IP addresses by heart. No need to change this.

I had to update the web application itself a tiny bit in order to cope with a REMOTE_ADDR that didn’t quite look the same any more though:

  • There were places where we were putting the remote address into the database. Thankfully, we are using PostgreSQL, whose native inet type (it even supports handy type-specific operators) has supported IPv6 since practically forever. If you’re using another database and you’re storing the address in a VARCHAR, be prepared to lengthen the column, as textual IPv6 addresses are much longer (up to 45 characters).
  • There were some places where we were using CIDR matching for some privileged API calls we allow from the internal network. Of course, because I haven’t changed the internal network, no code change was strictly needed, but I have updated the code (and unit tests) to deal with IPv6 too – a sketch of the approach follows below.
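
For illustration, here’s a minimal sketch of family-agnostic prefix matching built on inet_pton() – not our actual code, and the function name is made up:

    <?php
    // Check whether $ip lies within $net/$bits. Works for v4 and v6 alike
    // because inet_pton() returns packed binary for either family.
    function ip_in_prefix($ip, $net, $bits) {
        $ip  = inet_pton($ip);
        $net = inet_pton($net);
        if ($ip === false || $net === false || strlen($ip) !== strlen($net)) {
            return false; // unparsable input or address family mismatch
        }
        $bytes = $bits >> 3; // whole bytes to compare
        if (substr($ip, 0, $bytes) !== substr($net, 0, $bytes)) {
            return false;
        }
        if ($bits % 8 === 0) {
            return true;
        }
        // compare the remaining high-order bits of the next byte
        $mask = (0xff << (8 - $bits % 8)) & 0xff;
        return ((ord($ip[$bytes]) ^ ord($net[$bytes])) & $mask) === 0;
    }

    var_dump(ip_in_prefix('10.1.2.3', '10.0.0.0', 8));       // true
    var_dump(ip_in_prefix('2001:db8::1', '2001:db8::', 32)); // true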

The last step was to add the AAAA record for our load balancer.
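
In the zone file, that’s a single additional line next to the existing A record (addresses are examples):

    www    IN A       192.0.2.80
    www    IN AAAA    2001:db8:1::80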

From that moment on, our web properties were available via IPv6, and while there’s not a lot of IPv6 traffic from Switzerland, over in Germany about 30% of all requests are happening over IPv6.

email

Of the bunch, dealing with email was the most complicated step. Not so much because of enabling IPv6 support in the MTA – that has been supported since forever (we’re using Exim (warning: very old post)).
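
For the record, making Exim listen on both address families amounts to something like this (the <; changes the list separator so the colons in the v6 address don’t get misparsed):

    # exim: listen on all v4 and v6 addresses
    local_interfaces = <; 0.0.0.0 ; ::0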

The difficulty lay in getting everything else to work smoothly though – mostly in regards to spam filtering:

  • Many RBLs don’t support IPv6, so I had to make sure we weren’t accidentally treating all mail delivered to us over IPv6 as spam.
  • If you want to have any chance at your mail being accepted by remote parties, then you must have a valid PTR record for your mail server. This meant getting reverse DNS to work right for IPv6.
  • Of course you also need to update the SPF record now that you are sending email over IPv6 – see the example below.
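
The updated SPF record might then look something like this (example addresses; the important bit is the added ip6 mechanism):

    example.com.  IN TXT  "v=spf1 ip4:192.0.2.25 ip6:2001:db8:1::25 ~all"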

PTR record

The PTR record was actually the crux of the matter.

In IPv4, it’s impractical or even impossible to get a reverse delegation for anything smaller than a /24 because of the way reverse lookup works in DNS. There was RFC 2317, but that was just too much hassle for most ISPs to implement.

So the usual process was to let the ISP handle the few PTR records you wanted.

This changes with IPv6 in two ways: the allocation is mostly fixed at a /64 or larger, and because there are so many IPv6 addresses, networks can be split at byte boundaries without being stingy, so it is trivially easy to do proper reverse delegation to customers.

And because there are so many addresses available for a customer (a /64 allocation is enough addresses to cover 2^32 whole internets), reverse delegation is the only way to make good use of all these addresses.
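
Once the reverse zone for your prefix is delegated to you, the PTR records live in a zone you control yourself. For the example prefix 2001:db8:1::/64, that looks like this (note the nibble-reversed zone name):

    ; reverse zone for 2001:db8:1::/64 (example prefix)
    $ORIGIN 0.0.0.0.1.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa.
    ; PTR for 2001:db8:1::25
    5.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0  IN PTR  mail.example.com.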

This is where I hit my next roadblock with the ISP though.

They were not at all set up for proper reverse delegation – the support ticket I opened in November of 2014 took over six months to finally get closed in May of this year.

As an aside: this was a professional colocation provider for business customers that was, in 2014, not prepared to even hand out IPv6 addresses and that required six months to get reverse delegation to work.

My awesome ISP has been handing out IPv6 addresses since the late nineties, and they offer reverse delegation for free to anybody who asks. As a matter of fact, it was them who asked me whether I wanted a reverse delegation when I signed up with them last year.

Of course I said yes :-)

This brought me to the paradoxical situation of having a fully working IPv6 setup at home while having to wait six months for my commercial business ISP to get there.

it’s done now

So after spending about two days learning about IPv6, about two days updating our application, one day convincing our ISP to give us the IPv6 allocation they promised in the contract, and after waiting six months for the reverse delegation, I can finally say that all our services are now accessible via IPv6.

Here are the headers of the very first email we’ve transmitted via IPv6.

And here’s the achievement badge I waited so patiently (because of the PTR delegation) to finally earn 🎉

IPv6 Certification Badge for pilif

I can’t wait for the accompanying T-Shirt to arrive 😃

Why we need IPv6

As we are running out of IPv4 network addresses (and yes, we are), there are only two possible future scenarios, and one of the two most people are not going to like at all.

As IP addresses get more and more scarce, things will start to suck for both clients and content providers.

As more and more clients connect, carrier grade NAT will become the norm. NAT already sucks, but at least you get to control it, and using NAT-PMP or UPnP, applications in your network get some control over being able to accept incoming connections.

Carrier grade NAT is different. That’s NAT being done on the ISP’s end, so you don’t get to open ports at all. This will affect gaming performance, it will affect your ability to use VoIP clients and, of course, file sharing clients.

For content providers on the other hand, it will become more and more difficult to get the public IP addresses needed for them to be able to actually provide content.

Back in the day, if you wanted to launch a service, you would just do it. No need to ask anybody for permission. But in the future, as addresses become scarce and controlled by big ISPs which also act as content providers, the ISPs become the gatekeepers for new services.

Either you do something they want you to be doing, or you don’t get an address: as there will be way more content providers fighting over addresses than there will be addresses available, it’s easy for them to be picky.

Old companies who still have addresses are of course not affected, but competing against them will become hard or even impossible.

More power to the ISPs and no competition for existing content providing services both are very good things for players already in the game, so that’s certainly a possible future they are looking forward to.

If we want to prevent this possible future from becoming reality, we need a way out. IPv4 is drying up. IPv6 has existed for a long time, but people are reluctant to upgrade their infrastructure.

It’s a vicious cycle: People don’t upgrade their infrastructure to IPv6 because nobody is using IPv6 and nobody is using IPv6 because there’s nothing to be gained from using IPv6.

If we want to keep the internet as an open medium, we need to break the cycle. Everybody needs to work together to provide services over IPv6, to the point of even offering services over IPv6 exclusively.

Only then can we start to build pressure for ISPs to support IPv6 on their end.

If you are a content provider, ask your ISP for IPv6 support and start offering your content over IPv6. If you are an end user, pressure your ISP to offer IPv6 connectivity.

Knowing this, about a year ago, motivated by my awesome ISP who has been offering IPv6 connectivity since forever, I started to get our commercial infrastructure up to speed.

Read on to learn how that went.

The Future of the JRPG genre

After an underwhelming false start with Xenoblade Chronicles back when the game came out, the re-release on the 3DS made me give it another try, and now that I’m nearly through with the game (just beat the third-to-last main quest boss), I feel compelled to write my first game review after many years of non-gaming content here.

«Review» might not be the entirely correct term though, as this article is about to explain why I personally believe Xenoblade to be one of the best instances of the JRPG genre and why it might actually be very high up there in my list of all-time favorite games.

But first, let’s talk about what’s not so good about the game and why I nearly missed this awesome game: if I had to list the shortcomings of this masterpiece, they would be the UI design of the side-questing system and the very, very slow start of the story.

First the story: after maybe an hour of play time, the player is inclined to think they have been thrown into the usual revenge plot, this time about a fight against machine-based life forms, but a simple revenge plot nonetheless. Also, to be honest, it’s not even a really interesting revenge plot. It feels predictable and not at all like what we’re used to from the genre.

Once you reach the halfway mark of the game, the subtle hints the game has been dropping on you until then start to become less and less subtle, revealing to the player that they got it all wrong.

The mission of the game changes completely, to the point of even changing whom you are fighting against and turning around many things you’ve taken for granted in the first half.

This is some of the most impressive story-development I’ve seen so far and also came as a complete surprise to me.

So what felt like the biggest shortcoming of the game (lackluster story) suddenly turned into one of its strongest points.

«Other games of the genre also did this» you might think as you compare this to Final Fantasy XII, but where that game unfortunately never really takes off nor adds any bigger plot-twists, the thing that Xenoblade does after the half-time marker is simply mind-blowing to the point of me refusing to post any spoilers even though the game is quite old by now.

So we have a game that gets amazing after 20-40 hours (depending on how you deal with the side-quests). What’s holding us over until then?

The answer to that question is the reason why I think that Xenoblade is one of the best JRPGs so far: What’s holding us over in the first 40 hours of the game is, you know, gameplay.

The battle system feels like it has been lifted from current MMORPGs (I’m mostly referring to World of Warcraft here as that’s the one I know best), though while it has been scaled down in sheer amount of skills, the abilities themselves have been much better balanced between the characters, which of course is possible in a single-player game.

The game’s affinity system also greatly incentivises the player to switch their party around as they play the game. This works really well when you consider the different play styles offered by the various characters. A tank plays differently from DPS which plays differently from the (unfortunately only one) healer.

But even between members of the same class there are differences in play style leading to a huge variety for players.

This is the first JRPG where I’m actually looking forward to combat – it’s that entertaining.

While the combat sometimes can be a bit difficult, especially because randomness still plays a huge part, it’s refreshing to see that the game doesn’t punish you at all for failing: If you die you just respawn at the last waypoint and usually there’s one of these right in front of the boss.

Even better, normally the fight just starts again, skipping all introductory cutscenes. And even if some cutscenes are not skipped automatically: the game always allows cutscenes to be skipped.

This makes a lot of sense, because combat is actually so much fun that there’s considerable replay value to the game, which the skippable cutscenes only enhance – though some of them you would never ever in your life want to skip, they are so good (you know which ones I’m referring to).

Combat is only one half of the gameplay, the other is exploration: the world of the game is huge and, for the first time ever in a JRPG, the simple rule of «you can see it, you can go there» applies. For the first time ever, the huge world is yours to explore and to enjoy.

Never have I seen such variety in locations, especially, again, in the second half of the game which I really don’t want to spoil here.

Which brings us to the side-quests: Imagine that you have a quest-log like you’re used to from MMORPGs with about the same style of quests: Find this item, kill these normal mobs, kill that elite mob, talk to that other guy – you know the drill.

The non-unique and somewhat random dialog lines between the characters as they accept these side-quests break the immersion a bit.

But the one big thing that’s really annoying about the side-quests is discoverability: As a player you often have no idea where to go due to the vague quest texts and, worse, many (most) quests are hidden and only become available after you trigger some event or you talk to the correct (seemingly unrelated) NPC.

While I can understand the former issue (vague quest descriptions) from a game-play perspective, the latter is inexcusable, especially as the leveling curve of the game and the affinity system both really are designed around you actually doing these side-quests.

It’s unfair and annoying that playing hide-and-seek for hours is basically a fixed requirement for having a chance at beating the game. This feels like uselessly prolonging the existing game for no reason but to, you know, prolong the game.

Thankfully though, by now, the Wiki exists, so whether you’re on the Wii or the 3DS, just have an iPad or laptop close to you as you do the side-questy parts of the game.

Once you’re willing to live with this issue, the absolutely amazing gameplay comes into effect again: because exploration is so much fun and because the battle system is so much fun, suddenly the side-quests become fun too, once you remove the annoying hide-and-seek aspect.

After all, it’s the perfect excuse to do more of what you enjoy the most: Playing the game.

This is why I strongly believe that this game would have been so much better with a more modern quest-log system: Don’t hide (most of the) quests! Be precise in explaining where to find stuff! You don’t have to artificially prolong the game: Even when you know where to go (I did thanks to the Wiki), there’s still more than 100 hours of entertainment there to be had.

The last thing about quests: some of the quests require you to find rare items, which you have a random chance of getting by collecting «item orbs» spread all over the map. This is of course another nice way to encourage exploration.

But I see no reason why the drop rate must be random, especially as respawning the item orbs requires you to either wait 10 to 30 minutes or save and reload the game.

If you want to encourage exploration, hide the orbs! There’s so much content in this game that artificially prolonging it with annoying saving-and-reloading escapades is completely unnecessary.

At least the amount of grinding required isn’t that bad – it was absolutely bearable for me, and I have nearly zero patience for grinding.

Don’t get me wrong though: Yes, these artificial time-sinks were annoying (and frankly 100% unneeded), but because the actual gameplay is so much fun, I didn’t really mind them that much.

Finally, there are some technical issues, which I don’t really mind that much however: faces of characters look flat and blurry, which is very noticeable in the cutscenes, all of which are rendered by the engine itself (which is a very good thing).

Especially on the 3DS, the low resolution of the game is felt badly (the 3DS is much worse than the Wii, to the point of objects sometimes being invisible), and some objects pop into view at times. This is mostly a limitation of the hardware, which just doesn’t play well with the huge open world, so I can totally live with it. It only minimally affects my immersion in the game.

If you ask me what the preferred platform to play this on is, I would point at the Wii version, though, of course, it’ll be very hard to get the game at this point in time (no, you can’t have my copy).

the good

So after all of this, here’s a list of the unique features this game has over all other members of its genre:

  • Huge world that can be explored completely. No narrow hallways but just huge open maps.
  • Absolutely amazing battle system that goes far beyond the usual «select some action from this text-based menu»
  • Skippable cutscenes which together with the battle system make for a high replayability
  • Many different playable characters with different play styles
  • Great music by the god-like Mr. Mitsuda
  • A very, very interesting story once you reach the mid-point of the game
  • Very believable characters and very good character development
  • Some of the best cutscene direction I have ever seen in my life – again, mostly after the half-time mark (you people who played the game know which particular one I’m talking about – still sends shivers down my spine).

My wishes for the future

The game is nearly perfect in my opinion, but there are two things I think would be great to see fixed in the successor or in any other game taking its inspiration from Xenoblade:

First, please fix the quest log and bring it into the current decade of what we’re used to from MMORPGs (from which you lifted the quest design to begin with): show us where to get the quests, show us where to do them.

Second, and this one is even bigger in my opinion: Please be more considerate in how you represent women in the game. Yes, the most bad-ass characters in the game are women (again, I can’t spoil anything here). Yes, there’s a lot of depth to the characters of women in this game and they are certainly not just there for show but are actually instrumental to the overall story development (again, second part).

But why does most of the equipment for the healer in the game have to be practically underwear? Do you really need to spend CPU resources on (overblown) breast physics when you render everybody’s faces blurry and flat?

Wouldn’t it be much better for the story and the immersion if the faces looked better at the cost of some (overblown) jiggling?

Do you really have to constantly show close-ups of way too big breasts of one party member? This is frankly distracting from what is going on in the game.

I don’t care about cultural differences: You managed to design very believable and bad-ass women into your game. Why do you have to diminish this by turning them into a piece of furniture to look at? They absolutely stand on their own with their abilities and their character progression.

It is the year 2015. We can do better than this (though, of course, the world was different in 2010 when the game initially came out).

Conclusion

All of that aside: because of the amazing gameplay, because of the mind-blowing story, because of the mind-blowing cutscene direction and because of the huge world that’s anything but narrow passages, I love this game more than many others.

I think this is the first time in about a decade that the JRPG genre has really moved forward, and I would definitely like to see more games ripping off the good aspects of Xenoblade (well – basically everything).

As such, I’m very much looking forward to the game’s successor becoming available here in Europe (it has just come out in Japan and my Japanese is still practically non-existent), and I know for a fact that I’m going to play it a lot, especially as I now know to be patient with the side-quests.

Geek heaven

If I had to make a list of attributes I would like the ISP of my dreams to
have, then I could write quite the list:

  • I would really like to have native IPv6 support. Yes, IPv4 will be
    sufficient for a very long time, but unless people start having access to
    IPv6, it’ll never see the wide deployment it needs if we want the internet
    to continue to grow. An internet where addresses are only available to
    people with a lot of money is not an internet we all want to be subjected
    to (see my post «asking for permission»).
  • I would want my ISP to accept or even support network neutrality. For this
    to be possible, the ISP of my dreams would need to be nothing but an ISP so
    their motivations (provide better service) align with mine (getting better
    service). ISPs who also sell content have all the motivation to provide
    crappy Internet service in order to better sell their (higher-margin)
    content.
  • If I have technical issues, I want to be treated as somebody who obviously
    has a certain level of technical knowledge. I’m by no means an expert in
    networking technology, but I do know about powering it off and on again. If
    I have to say «shibboleet» to get to a real
    technician, so be it, but if that’s not needed, that’s even better.
  • The networking technology involved in getting me the connectivity I want
    should be widely available and thus easily replaceable if something breaks.
  • The networking technology involved should be as simple as possible: The
    more complex the hardware involved, the more stuff can break, especially
    when you combine cost-pressure for end-users with the need for high
    complexity.
  • The network equipment I’m installing at my home, which thus has access to
    my LAN, needs to be equipment I own and fully control. I do not accept
    leased equipment to which I do not have full access.
  • And last but not least, I would really like to have as much bandwidth as
    possible.

I’m sure I’m not alone with these wishes, even though, for «normal people»
they might seem strange.

But honestly: They just don’t know it, but they too have the same interests.
Nobody wants an internet that works like TV where you pay for access to a
curated small list of “approved” sites (see network neutrality and IPv6
support).

Nobody wants to get up and reboot their modem every now and then because it
crashed. Nobody wants to be charged with downloading illegal content because
their Wifi equipment was sneakily repurposed as an open access point for
other customers of an ISP.

Most of the wishes I listed above are the basis needed for these horror
scenarios never coming to pass, however unlikely they might seem now (though
getting up and rebooting the modem/router is something we already have to
deal with today).

So yes. While it’s getting rarer and rarer to have all the points on my list
fulfilled – to the point where I thought it impossible to get all of them –
I’m happy to say that here in Switzerland, there is at least one ISP that
does all of this and more.

I’m talking about Init7 and especially their
awesome FTTH offering Fiber7 which very recently
became available in my area.

Let’s deal with the technology aspect first as this really isn’t the
important point of this post: what you get from them is pure 1Gbit/s
Ethernet. Yes, they do sell you a router box if you want one, but you can
just as well get a simple media converter, or just an SFP module to plug
into any (managed) switch (with an SFP port).

If you have your own routing equipment, be it a Linux router like my
shion or be it any
Wifi router, there’s no need to add any kind of additional complexity to
your setup.

No additional component that can crash, no software running in your home to
which you don’t have the password, and certainly no sneakily opened public
WLANs (I’m looking at you,
cablecom).

Of course you get native IPv6 (a /48 which incidentally is room for
281474976710656 whole internets in your apartment) too.

But what’s really remarkable about Init7 isn’t the technical aspect (though,
again, it’s bloody amazing), but everything else:

  • Init7 was one of the first ISPs in Switzerland to offer IPv6 to end users.
  • Init7 doesn’t just support network neutrality.
    They actively fight for it.
  • They explicitly state
    that they are not selling content and they don’t intend to start doing so. They are just an ISP and as such their motivations totally align with mine.

There are a lot of geeky soft factors too:

  • Their press releases are written in Open Office (check the PDF properties
    of this one
    for example)
  • I got an email from a technical person on their end that was written using
    f’ing Claws Mail on Linux
  • Judging from the Received headers of their email, they are using IPv6 in
    their internal LAN – down to the desktop workstations. And related to that:
  • The machines in their LAN respond to ICMPv6 pings, which is utterly crazy
    cool. Yes, they are firewalled (cough – I had to try. Sorry.), but they let
    ICMP through. For the less technical readers here: this is as good an
    internet citizen as you will ever see, and it’s extremely unexpected these
    days.

If you are a geek like me and your ideals align with the ones I listed
above, there is no question: you have to support them. If you can get their
fiber offering in your area, this is a no-brainer. You can’t get synchronous
1GBit/s for CHF 64ish per month anywhere else, and even if you could, it
wouldn’t be plain Ethernet either.

If you can’t have their fiber offering, it’s still worth considering their
other offers. They do have some DSL based plans which of course are
technically inferior to plain ethernet over fiber, but you would still
support one of the few remaining pure ISPs.

It doesn’t have to be Init7 either. For all I know there are many others,
maybe even here in Switzerland. Init7 is what I decided to go with initially
because of the Gbit, but the more I learned about their philosophy, the less
important the bandwidth got.

We need to support companies like these because companies like these are
what ensures that the internet of the future will be as awesome as the
internet is today.

Thoughts on IPv6

A few months ago, the awesome provider Init7 released their
awesome FTTH offering Fiber7 which provides
synchronous 1GBit/s access for a very fair price. Actually, they are by
far the cheapest provider for this kind of bandwidth.

Only cablecom comes close to matching them bandwidth-wise with their
250Mbit/s package, but that’s a quarter of the bandwidth for nearly double
the price. Init7 is also one of the only providers who officially state that
their triple-play strategy is that they don’t do it. Huge-ass kudos for
that.

Also, their technical support is using Claws Mail on GNU/Linux – to give you
some indication of the geek-heaven you get when signing up with them.

But what’s really exciting about Init7 is their support for IPv6. In fact,
Init7 was one of the first (if not the first) providers to offer IPv6 to
end users. Also, we’re talking about a real, non-tunneled, no-strings-attached
plain /48.

In case that doesn’t ring a bell, a /48 will allow for 2^16 networks
consisting of 2^64 hosts each. Yes. That’s that many hosts.

In eager anticipation of getting this at home natively (of course I ordered
Fiber7 the moment I could at my place), I decided to play with IPv6 as far as
I could with my current provider, which apparently lives in the stone-age and
still doesn’t provide native v6 support.

After getting abysmal pings using 6to4 about a year ago, this time I decided
to go with tunnelbroker which these days also
provides a nice dyndns-alike API for updating the public tunnel endpoint.
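
Updating the endpoint is a single authenticated HTTP request in the style of
the old dyndns API – roughly like this (credentials and tunnel id are
placeholders; check their docs for the exact parameters):

    # tell tunnelbroker about our current public v4 address
    curl -4 -u USERNAME:UPDATE_KEY \
        "https://ipv4.tunnelbroker.net/nic/update?hostname=TUNNEL_ID"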

Let me tell you: Setting this up is trivial.

Tunnelbroker provides you with all the information you need for your tunnel,
and with the prefix of the /64 you get from them, setting up your own
network is trivial using radvd.
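
A minimal radvd.conf announcing the tunneled /64 on the LAN looks about like
this (prefix and interface name are examples):

    # /etc/radvd.conf
    interface eth0 {
        AdvSendAdvert on;
        prefix 2001:db8:1234:5678::/64 {
            AdvOnLink on;
            AdvAutonomous on;
        };
    };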

The only thing that’s different from your old v4 config: all your hosts will
immediately be accessible from the public internet, so you might want to
configure a firewall from the get-go – but see below for some thoughts on
that matter.

But this isn’t any different from the NAT solutions we have currently. Instead
of configuring port forwarding, you just open ports on your router, but the
process is more or less the same.

If you need direct connectivity however, you can now have it. No strings attached.

So far, I’ve used devices running iOS 7 and 8, Mac OS X 10.9 and 10.10, and
Windows XP, 7 and 8, and none of them had any trouble reaching the v6
internet. Also, I would argue that configuring radvd is easier than
configuring DHCP. There’s less thought involved in assigning addresses
because autoconfiguration will just deal with that.

I had to adjust how I think about my network a bit, and I’m posting here in
order to explain what changes you’ll get with v6 and how some paradigms
change. Once you’ve accepted these changes, using v6 is trivial and totally
something you can get used to.

  • Multi-homing (multiple addresses per interface) was something you rarely
    did in v4. Now in v6, you do it all the time. Your OSes go as far as to
    grab a new random address every few connections in order to provide a
    means of privacy.
  • The addresses are so long and hex-y – you probably will never remember them.
    But that’s ok. In general, there are much fewer cases where you worry about
    the address.

    • Because of multi-homing every machine has a guaranteed static address
      (built from the MAC address of the interface) by default, so there’s no
      need to statically assign addresses in many cases.
    • If you want to assign static addresses, just pick any in your /64.
      Unless you manually hand out the same address to two machines,
      autoconfiguration will make sure no two machines pick the same address.
      In order to remember them, feel free to use cute names – finally you got
      some letters and leetspeak to play with.
    • To assign a static address, just do it on the host in question. Again,
      autoconfig will make sure no other machine gets the same address.
  • And with Zeroconf (avahi / bonjour), you have fewer and fewer
    opportunities to deal with anything that’s not a host name anyway.
  • You will need a firewall because suddenly all your machines will be
    accessible to the whole internet. You might get away with just the local
    personal firewall, but you probably should have one on your gateway (see
    the sketch after this list).
  • While that sounds like higher complexity, I would argue that the complexity
    is lower because if you were a responsible sysadmin, you were dealing with
    both NAT and a firewall whereas with v6, a firewall is all you need.
  • Tools like NAT-PMP or UPnP don’t support v6 yet as far as I can see, so
    applications in the trusted network can’t yet punch holes in the firewall
    (which is the equivalent of forwarding ports in the v4 days).
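
As a starting point, a stateful v6 gateway firewall can be as small as this
sketch (the LAN prefix is an example; note that ICMPv6 must be allowed or
you’ll break path MTU discovery and neighbor discovery):

    # default: drop forwarded traffic
    ip6tables -P FORWARD DROP
    # allow replies to connections initiated from the inside
    ip6tables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
    # ICMPv6 is not optional in v6
    ip6tables -A FORWARD -p icmpv6 -j ACCEPT
    # let the LAN talk to the world
    ip6tables -A FORWARD -s 2001:db8:1234:5678::/64 -j ACCEPT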

Overall, getting v6 running is really simple, and once you adjust your
mindset a bit – while stuff is unusual and takes some getting used to – I
really don’t see v6 as being more complicated. Quite the contrary, actually.

As I’m thinking about firewalls and opening ports: as hosts get wiser about
v6, you really might get away without a strict firewall, as hosts could grab
a new random v6 address for every connection they want to use and then just
bind their servers to that address.

Services binding to all addresses would never bind to these temporary addresses.

That way none of the services brought up by default (you know – all those
ports open on your machine when it runs) would be reachable from the outside.
What would be reachable is the temporary addresses grabbed by specific
services running on your machine.

Yes. An attacker could port-scan your /64 and try to find the non-temporary
address, but keep in mind that finding that one address out of 2^64
addresses would mean that you have to port-scan 4 billion traditional v4
internets per attack target (good luck) or randomly guessing with an average
chance of 1:2^63 (also good luck).

Even then a personal firewall could block all unsolicited packets from
non-local prefixes to provide even more security.

As such, we really might get away without actually needing a firewall at the
gateway to begin with, which would go a long way toward providing the
ubiquitous configuration-free p2p connectivity that would be ever-so-cool
and which we have lost over the last few decades.

Me personally, I’m really happy to see how simple v6 actually is to
implement, and I’m really looking forward to my very own native /48, which
I’m probably going to get somewhere in September/October-ish.

Until then, I’ll gladly play with my tunneled /64 (for now still firewalled,
but I’ll investigate how OS X and Windows deal with the temporary addresses
they use, which might allow me to actually turn the firewall off).

pdo_pgsql improvements

Last autumn, I was talking about how I would like to see pdo_pgsql for PHP to be improved.

Over the last few days I had time to seriously start looking into making sure I get my wish. Even though my C is very rusty and I have next to no experience dealing with the PHP/Zend API, I made quite a bit of progress.

First, JSON support

json

If you have the json extension enabled in your PHP install (it’s enabled by default), then any column of data type json will be automatically parsed and returned to you as an array.

No need to constantly repeat yourself with json_decode(). This works, of course, with directly selected json columns or with any expression that returns json (like array_to_json or the direct typecast shown in the screenshot).

This is off by default and can be enabled on a per-connection or a per-statement level so as not to break backwards compatibility (I’ll need it off until I get a chance to clean up PopScan, for example).
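
To make this concrete, here’s roughly what using it looks like – note that the attribute name is made up for this example; the actual constant in my branch may be named differently:

    <?php
    $pdo = new PDO('pgsql:host=localhost;dbname=test');
    // hypothetical attribute name – see the branch for the real one
    $pdo->setAttribute(PDO::PGSQL_ATTR_DECODE_JSON, true);

    $stmt = $pdo->query('SELECT \'{"a": 1}\'::json AS doc');
    $row  = $stmt->fetch(PDO::FETCH_ASSOC);
    // with the feature on, $row['doc'] is array('a' => 1);
    // with it off (the default), it's the string '{"a": 1}'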

Next, array support:

array

Just like with JSON, this will automatically turn any array expression (of the built-in array types) into an array to use from PHP.

As I’m writing this blog entry, this only works for text[] and it’s always enabled.
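
In practice, that means something like this:

    <?php
    $pdo  = new PDO('pgsql:host=localhost;dbname=test');
    $stmt = $pdo->query("SELECT ARRAY['a', 'b']::text[] AS tags");
    $row  = $stmt->fetch(PDO::FETCH_ASSOC);
    // $row['tags'] comes back as array('a', 'b')
    // instead of the raw Postgres literal '{a,b}'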

Once I have an elegant way to deal with the various types of arrays and convert them into the correct PHP types, I’ll work on making this turnoffable (technical term) too.

I’ll probably combine this and the automatic JSON parsing into just one setting which will include various extended data types both Postgres and PHP know about.

Once I’ve done that, I’ll look into more points on my wishlist (better error reporting with 9.3 and later and a way to quote identifiers come to mind) and then I’ll probably try to write a proper RFC and propose this for inclusion into PHP itself (though don’t get your hopes up – they are a conservative bunch).

If you want to follow along with my work, have a look at my pdo_pgsql-improvements branch on github (it tracks PHP-5.5).

Ansible

In the summer of 2012, I had the great opportunity to clean up our hosting
infrastructure. Instead of running many differently configured VMs, mostly
one per customer, we started building a real redundant infrastructure with
two really beefy physical database machines (yay) and quite a few (22)
virtual machines for caching, web app servers, file servers and so on.

All components are fully redundant; every box can fail without anybody
really needing to do anything (one exception is the database – that’s also
redundant, but we fail over manually due to the huge cost in time of
failing back).

Of course you don’t manage ~20 machines manually any more: aside from the
fact that it would be really painful for those that have to be configured
identically (the app servers come to mind), you also want to be able to
quickly bring a new box online, which means you don’t have time to manually
go through the hassle of configuring it.

So, in the summer of 2012, when we started working on this, we decided to go
with puppet. We also considered Chef, but their server
was really complicated to set up and install, and there was zero incentive
for them to improve because that would, after all, disincentivize people
from becoming customers of their hosted solutions (the joys of open-core).

Puppet is also commercially backed, but everything they do is available as
open source, and their approach to the central server is much more
«batteries included» than what Chef has provided.

And finally, after playing around a bit with both Chef and puppet, we noticed
that puppet was way more bitchy and less tolerant of quick hacks around issues
which felt like a good thing for people dabbling with HA configuration of a
multi machine cluster for the first time.

Fast forward one year: last autumn I found out about
ansible (linking to their github page –
their website reads like a competition in buzzword bingo), and after reading
their documentation, I was immediately convinced:

  • No need to install an agent on managed machines
  • Trivial to bootstrap machines (due to above point)
  • Contributors don’t need to sign a CLA (thank you so much, ansibleworks!)
  • No need to manually define dependencies of tasks: tasks are run
    sequentially
  • Built-in support for cowsay by default
  • Many often-used modules included by default, no hunting for, say, a sysctl
    module on github
  • Very nice support for rolling updates
  • Also providing a means to quickly do one-off tasks
  • Very easy to make configuration entries based on the host inventory (which requires puppetdb and an external database in the case of puppet)

Because ansible connects to each machine individually via SSH, running it
against a full cluster of machines is going to take a bit longer than with
puppet, but our cluster is small, so that wasn’t much of a deterrent.

So last Sunday evening I started working on porting our configuration over from
puppet to Ansible and after getting used to the YAML syntax of the playbooks, I
made very quick progress.

progress

Again, I’d like to point out the excellent, built-in, on-by-default support for
cowsay as one of the killer-features that made me seriously consider starting
the porting effort.

Unfortunately though, after a very promising start, I had to come to the
conclusion that we will be sticking with puppet for the time being because
there’s one single feature that Ansible doesn’t have and that I really, really
want a configuration management system to have:

It’s not possible in Ansible to tell it to keep a directory clean of files
not managed by Ansible in some way.

There are, of course, workarounds, but they come at a price too high for me to
be willing to pay.

  • You could first clean a directory completely using a shell command
    (sketched below), but this will lead to ansible detecting a change to that
    folder every time it runs, which will cause server restarts even when they
    are not needed.

  • You could do something like this stack overflow question,
    but this has the disadvantage that it forces you into a
    configuration-file-specific playbook design instead of a role-specific
    one.
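
To illustrate the first workaround, sketched in 1.x-style playbook syntax
(paths and names are examples):

    # wipe the directory, then put our files back: ansible now reports a
    # change on every run, triggering restarts even when nothing changed
    - name: clean out sysctl.d
      shell: rm -f /etc/sysctl.d/*

    - name: install managed sysctl files
      copy: src={{ item }} dest=/etc/sysctl.d/
      with_fileglob:
        - files/sysctl.d/*
      notify: reload sysctl  # handler assumed to be defined elsewhere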

What I mean is that using the second workaround, you can only have one
playbook touching that folder. But imagine, for example, a case where you
want to work with /etc/sysctl.d: a generic role would put some stuff there,
but then your firewall role might put more stuff there (to enable IP
forwarding) and your database role might want to add other stuff (like
tweaking shmmax and shmall, though that’s thankfully not needed any more in
current Postgres releases).

So suddenly your /etc/sysctl.d role needs to know about firewalls and
databases, which totally violates the really nice separation of concerns
between roles. Instead of having a firewall and a database role both doing
something to /etc/sysctl.d, you now need a sysctl role which does different
things depending on what other roles a machine has.

Or, of course, you just don’t care that stray files never get removed, but
honestly: do you really want to live with the fact that your /etc/sysctl.d
or, worse, /etc/sudoers.d can contain files not managed by ansible and
likely not intended to be there? Both sysctl.d and sudoers.d are more than
capable of doing immense damage to your boxes, all of it sneakily behind the
watching eye of your configuration management system.

For me, that’s unacceptable.

So despite all the nice advantages (like cowsay), this one feature is something
that I really need and can’t have right now and which, thus, forces me to stay
away from Ansible for now.

It’s a shame.

Some people tell me that implementing my feature would require puppet’s
approach of building a full state of a machine before doing anything (which
is error-prone and frustrating for users at times), but that’s not really
true.

If ansible modules could talk to each other – maybe loosely coupled by
firing some events as they do stuff – you could just name the task that
makes sure the directory exists first and then have that task register some
kind of event handler to be notified as other tasks touch the directory.

Then, at the end, remove everything you didn’t get an event for.

Yes, this would probably (I don’t know how Ansible is implemented
internally) mess with the decoupling of modules a bit, but it would still be
far removed from re-implementing puppet.

Which is why I’m posting this here – maybe, just maybe, somebody reads my plight
and can bring up a discussion and maybe even a solution for this. Trust me: I’d
so much rather use Ansible than puppet, it’s crazy, but I also want to make sure
that no stray file in /etc/sysctl.d will bring down a machine.

Yeah, this is probably the most words I’ve ever used for a feature request,
but this one is really, really important to me, which is why I’m so
passionate about it. Ansible got so f’ing much right. It’s such a shame to
still be left unable to really use it.

Is this a case of xkcd1172? Maybe, but to me, my
request seems reasonable. It’s not? Enlighten me! It is? Great! Let’s work on
fixing this.