Fiber7 TV behind PFSense

As I’ve stated previously, I’m subscribed to what is probably the coolest ISP on earth. Between the fully symmetric Gbit/s, their stance on network neutrality, their IPv6 support and their awesome support even for advanced things like setting up an IPv6 reverse DNS delegation(!), there’s nothing more you could wish for from an ISP.

For some time now, they have also provided an IPTV solution as an additional subscription called tv7.

As somebody who last watched live TV around 20 years ago, I wasn’t really interested in subscribing to that. However, contrary to many other IPTV solutions, what’s special about the Fiber7 offering is that they use IP multicast to deliver the unaltered DVB frames to their users.

For people interested in TV, this is great because it’s, for all intents and purposes, lag-free: the data is broadcast directly through their network, where interested clients can just pick it up (of course there will be some <1ms of lag for the data to move through their network, plus some additional <1ms of lag as your router forwards the packets to your internal network).

As I had never dealt with IP multicast, this was an interesting experiment for me. When they released their initial offering, they provided a test stream to check whether your infrastructure was multicast-ready or not.

Back then, I never got it to work behind my PFSense setup, but as I wasn’t interested in TV, I never bothered spending time on it, though it did hurt my pride.

Fast forward to about three weeks ago, when I made a comment on Twitter about that hurt pride to the CEO of Fiber7. He informed me that the test stream was down, but he also sent me a DM asking whether I was interested in trying out their tv7 offering, including the beta version of their app for the AppleTV.

That was one evil way to nerd-snipe me, so naturally, I told him that, yes, I would be interested, but that I wasn’t ever really going to use it beyond just getting it to work, because live TV just doesn’t interest me.

Despite the fact that it was past 10pm, he sent me another DM, telling me that he had enabled tv7 for my account.

I spent the rest of the night experimenting with IGMP Proxy and the PFSense firewall with varying success, but the next day I was finally successful:

(screenshot: a tv7 stream playing in VLC)

You might notice that this is a screenshot of VLC. That’s no coincidence: while Fiber7 officially only supports the AppleTV app, they also offer links on one of their support pages to m3u and xspf playlists that advanced users can use (which is another case of Fiber7 being awesome), so while debugging this, I definitely preferred using VLC, which has a proper debug log.
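
If you want to experiment yourself, VLC can open either the playlist or a single multicast group directly. The filename, group address and port below are placeholders – take the real values from the playlist Fiber7 links to:

# open the downloaded playlist (the filename is whatever you saved it as)
vlc tv7.m3u

# or tune to a single channel’s multicast group directly
vlc "udp://@<group-address>:<port>"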

After I got it to work, I also found a bug in the Beta version of the Fiber7 app where it would never unsubscribe from a multicast group, causing the traffic to my LAN to increase whenever I would switch channels in the app. The traffic wouldn’t decrease even if the AppleTV went to sleep – only a reboot would help.

I’ve reported this to Fiber7 and within a day or two, a new release was pushed to TestFlight in order to fix the issue.

Since this little adventure happened, Fiber7 has changed their offering: Now every Fiber7 account gets free access to tv7 which will probably broaden the possible audience quite a bit.

Which brings me to the second point of this post: showing you the configuration needed if you’re using a PFSense-based gateway and want to make use of tv7.

First, you have to enable the IGMP proxy:

(screenshot: the IGMP Proxy settings page in PFSense)

For the LAN interface, please type in the network address and netmask of your internal IPv4 LAN.

What IGMP Proxy does is listen for clients in your LAN joining a multicast group and then join that group on their behalf on the upstream interface. It then forwards all traffic it receives on the upstream interface for that group to the group on the downstream interface. This is where the additional small bit of lag is added, but it’s the only way to have multicast cross routing boundaries.

This is also mostly done on your router’s CPU, but at the 20 Mbit/s a stream consumes, this shouldn’t be a problem on more or less current hardware.
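
For reference, the GUI settings boil down to an igmpproxy.conf along these lines. Take it as a sketch only: the interface names and the LAN network are placeholders for your setup, and the upstream altnet is the undocumented network I talk about further down:

# /var/etc/igmpproxy.conf – sketch of what PFSense generates
quickleave

# upstream (WAN): where the multicast streams come from
phyint igb0 upstream ratelimit 0 threshold 1
    altnet 77.109.128.0/19

# downstream (LAN): where your clients join groups
phyint igb1 downstream ratelimit 0 threshold 1
    altnet 192.168.1.0/24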

Anyway – if you want to actually watch TV, you’re not done yet: even though the service is now running, the built-in firewall will still drop any packets related to multicast joins as well as the actual multicast packets containing the video frames.

So the next step is to update the firewall:

Create the following rules for your WAN interface:

(screenshot: the two firewall rules on the WAN interface)

You will notice the little gear icon next to the rule: it means that additional options are enabled for it. The extra option you need to enable is this one here:

(screenshot: the “Allow IP options” checkbox in the advanced rule options)

I don’t really like the second of the two rules. In principle, you only need to allow a single IP: the one of your upstream gateway. But that might change whenever your IPv4 address changes, and I don’t think you will want to manually update your firewall rule every time.

Instead, I’m allowing all IGMP traffic from the WAN net, trusting Fiber7 not to leak other subscribers’ IGMP traffic to my network.
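
In text form, the two rules amount to roughly the following pf rules. This is a sketch of what the GUI generates rather than something to paste anywhere – the interface name is a placeholder, and the source network is, again, the undocumented one discussed below:

# the multicast UDP streams carrying the video data, sourced from Fiber7’s network
pass in quick on igb0 proto udp from 77.109.128.0/19 to 224.0.0.0/4 allow-opts

# the IGMP membership traffic from the WAN net (needs allow-opts because
# IGMP packets carry the router-alert IP option)
pass in quick on igb0 proto igmp from (igb0:network) to any allow-opts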

Unfortunately, you’re still not quite done.

While this configures the rules for the WAN interface, the default “pass all” rule on the LAN interface will still drop all video packets, because the “Allow IP options” checkbox shown above is off by default for that rule.

You have to update that too on the “LAN” interface:

(screenshot: the default LAN pass rule with “Allow IP options” enabled)

And that’s all.

The network I’m listing there, 77.109.128.0/19, is not officially documented. Fiber7 might change it at any time, at which point your nice setup will stop working and you’ll have to update the IGMP Proxy and firewall configuration.

In my case, I’ve determined the network address by running

/usr/local/sbin/igmpproxy -d -vvvv /var/etc/igmpproxy.conf

and looking at the error message where igmpproxy complained about traffic from an unknown network. I then looked up the network that address belongs to using whois and updated my config accordingly.
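
That lookup is a one-liner – the address here is just an example from within the range, not necessarily one you’ll see in your log:

whois 77.109.134.1 | grep -iE 'inetnum|route'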

Why I recommend against JWT

JSON Web Tokens are all the rage lately. They are lauded as a stateless alternative to server-side sessions and as the perfect way to handle authentication in your single-page app, and some people also sell them as a workaround for the EU cookie policy because, you know, they work without cookies too.

If you ask me, though, I would always recommend against using JWT to solve your problem.

Let me take the usual arguments in favour of JWT and debunk them, from worst to best:

Debunking arguments

It requires no cookies

General “best” practice stores the JWT in the browser’s local storage and then sends it to the server with every authenticated API call.

This is no different from a traditional cookie, with the exception that transmission to the server isn’t done automatically by the browser (which it would be for a cookie) and that it is significantly less secure than a cookie: as there is no way to set a value in local storage outside of JavaScript, there consequently is no equivalent to the cookies’ httponly flag. This means that any XSS vulnerability in your frontend now gives an attacker access to the JWT.
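
For comparison, here’s a minimal sketch of handing out an opaque session ID in an httponly cookie – I’m using Flask purely as an illustration; the point is the cookie flags, which keep the value out of reach of JavaScript and therefore of any XSS payload:

import secrets
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    token = secrets.token_urlsafe(32)  # opaque, random session ID
    # ...store the token server-side, mapped to the user...
    resp = make_response("ok")
    # httponly: JavaScript (and thus XSS) can never read this cookie
    # secure: only ever transmitted over HTTPS
    resp.set_cookie("session", token, httponly=True, secure=True, samesite="Lax")
    return resp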

Worse: as people often use JWT for both the short-lived access token and the refresh token, any XSS vulnerability now gives the attacker access to a valid refresh token that can be used to create new access tokens at will, even after the session has expired, completely negating the benefit of having separate refresh and access tokens.

“But at least I don’t need to display one of those EU cookie warnings” I hear you say. But did you know that the warning is only required for tracking cookies? Cookies that are required for the operation of your site (such as a traditional session cookie) don’t require you to put up that warning in the first place.

It’s stateless

This is another argument often used in favour of JWT: because the server can put all the required state into the token, there’s no need to store anything on the server side, so you can load-balance incoming requests to whatever app server you want and you don’t need any central store for session state.

In general, that’s true, but it becomes an issue once you need to revoke or refresh tokens.

JWT is often used in conjunction with OAuth where the server issues a relatively short-lived access token and a longer-lived refresh token.

When a client wants to refresh its access token, it uses its refresh token to do so. The server validates it and then hands out a new access token.

But for security reasons, you don’t want that refresh token to be reusable (otherwise a leaked refresh token could be used to gain access to the site for its whole validity period), and you probably also want to invalidate the previously used access token; otherwise, if that has leaked, it could be used until its expiration date even though the legitimate client has already refreshed it.

So you need a means to blacklist tokens.

Which means you’re back to keeping track of state, because that’s the only way to do this. Either you blacklist the whole encoded token, or you put some unique ID (the jti claim) into the token and blacklist that (comparing it after decoding the token), but whatever you do, you still need to keep track of that shared state.
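
As a rough sketch of what that looks like – this uses the PyJWT library, with an in-memory set standing in for whatever shared store you’d actually need:

import jwt  # PyJWT

SECRET = "shared-secret"
revoked_jtis = set()  # in reality: Redis, a database table, ... shared state either way

def check_token(token):
    # verifies signature and expiry; raises jwt.InvalidTokenError on failure
    payload = jwt.decode(token, SECRET, algorithms=["HS256"])
    if payload.get("jti") in revoked_jtis:
        raise PermissionError("token has been revoked")
    return payload

def revoke_token(token):
    payload = jwt.decode(token, SECRET, algorithms=["HS256"])
    revoked_jtis.add(payload["jti"])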

And once you’re doing that, you lose all the perceived advantages of statelessness.

Worse: because the server has to invalidate and blacklist both the access and the refresh token when a refresh happens, a connection failure during a refresh can leave a client without a valid token, forcing users to log in again.

In today’s world of mostly mobile clients on mobile phone networks, this happens more often than you’d think. Especially as your access tokens should be relatively short-lived.

It’s better than rolling your own crypto

In general, yes, I agree with that argument. Anything is better than rolling your own crypto. But are you sure your library of choice has implemented the signature check and decryption correctly? Are you keeping up to date with security flaws in your library of choice (or its dependencies)?

You know what is still better than using existing crypto? Using no crypto whatsoever. If all you hand out to the client is a completely random token and all you do is look up the data assigned to that token, then there’s no crypto anybody could get wrong.
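
That really is the whole trick – a minimal sketch, with a dict standing in for your database or cache:

import secrets

sessions = {}  # stand-in for a database or cache

def create_session(user_id):
    token = secrets.token_urlsafe(32)  # 256 bits of randomness, nothing to decode or verify
    sessions[token] = {"user_id": user_id}
    return token

def authenticate(token):
    # a single lookup; an unknown or revoked token simply isn't in the store
    return sessions.get(token)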

A solution in search of a problem

So once all the good arguments in favour of JWT have dissolved, you’re left with all their disadvantages:

  • By default, the JWT spec allows for insecure algorithms and key sizes. It’s up to you to choose safe parameters for your application.
  • Doing JWT means you’re doing crypto and you’re verifying and decoding potentially hostile data. Are you up to that additional complexity compared to a single primary-key lookup?
  • JWTs contain quite a bit of metadata and other bookkeeping information. Transmitting all of that with every request is more expensive than transmitting a single ID.
  • It’s brittle: your application has to make sure it never makes a request to the server without the token present. Every AJAX request your frontend makes needs to manually append the token, and as the server has to blacklist both access and refresh tokens whenever they are used, you might accidentally end up without a valid token when the connection fails during a refresh.

So are they really useless?

Despite all these negative arguments, I think JWTs are great for one specific purpose, and that’s authentication between different services in the backend when those services can’t blindly trust each other.

In such a case, you can use very short-lived tokens (with a lifetime measured in seconds at most) and they never leave your internal network. All the clients ever see is a traditional session cookie (in the case of a browser-based frontend) or a traditional OAuth access token.

This session cookie or access token is checked by the frontend servers (which, yes, have to have access to some shared state, but that isn’t an unsolvable problem), which then issue the required short-lived JWTs for talking to the various backend services.
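
A sketch of what issuing and checking such a service token could look like – again with PyJWT; the claim names and the ten-second lifetime are assumptions you’d adjust to your setup:

import time
import jwt  # PyJWT

SIGNING_KEY = "key-shared-between-frontend-and-backend"

def issue_service_token(user_id, audience):
    now = int(time.time())
    claims = {"sub": str(user_id), "aud": audience, "iat": now, "exp": now + 10}
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify_service_token(token, audience):
    # raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on anything fishy
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"], audience=audience)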

Or you use them when you have two loosely coupled backend services that trust each other and need to talk to each other. There, too, you can issue short-lived tokens (provided you are aware of the security issues described above).

In the case of short-lived tokens that never go out to the user, you circumvent most of the issues outlined above: they can be truly stateless because, thanks to their short lifetime, you never need to blacklist them, and they can be stored in a location that isn’t exposed to possible XSS attacks against your frontend.

This just leaves the issue of the difficult-to-get-right crypto, but as you never accept tokens from untrusted sources, a whole class of possible attacks becomes impossible, so you might even get away with not updating your JWT library on an overly tight schedule.

So, please, when you are writing your next web API that uses any kind of authentication and you ask yourself “should I use JWT for this”, resist the temptation. Using plain opaque tokens is always better when you talk to an untrusted frontend.

Only when you are working on scaling out your application and splitting it up into multiple disconnected microservices, and you need a way to pass credentials between them, should you go ahead and investigate JWT – it’ll surely be better than cobbling something together yourself.

Technology driven life changes

Last year, when I talked about finally seeing the Apple Watch becoming mildly useful, I had no idea what kind of a ride I was going to be on.

Generally, I’m not really concerned about my health or fitness, but last September, when my wonderful girlfriend left for a year of study in England, I decided that I finally had enough and I wanted to lose weight.

Having a year of near-zero social obligations would totally allow me to adjust my life-style in a way that’s conducive to weight loss, so here’s what I started doing:

  • During weekdays, I greatly reduced my calorie intake to basically just a salad and a piece of bread every day (you can pry my bread from my cold dead hands – it’s the one food I think I like the most).
  • Every day, no matter the weather, no matter the workload, no matter what, I was going to walk home after my work-day in the office, or on weekends, I would just take an equivalent walk.
  • Every day, I wanted to fill the “Activity” and the “Exercise” rings on my Apple Watch.

Now walking home sounds like nothing special, but I’m privileged to live in Zürich, Switzerland, which means that I have very easy access to forests to walk in.

So commuting home on foot meant that I could walk at least 8 kilometres (4.9 miles), climbing 330m (1082 feet), most of it through the forest.

Every day, no matter whether it was way too hot, way too cold, whether it was raining, hailing or snowing, I would walk home. And every day I would be using my Apple Watch to track what I would generously call a “Workout” (even though it was just walking – but if you go from zero sports to that, I guess it’s ok to call it that).

From September to December, I gradually increased the distance I walked.

This is the other great thing about Zürich: Once you reach the forest (which you do by walking 20 minutes in practically any direction), you can stay in the forest for hours and hours.

First I extended the 8km walk to 10km, then 12, then 14 and finally 19 (11 miles).

During that time, I kept tracking all the vital signs I could track between the Apple Watch and a Withings scale I bought 1-2 months into this.

My walks got faster and my resting heart rate got lower and lower, from 80 down to 60 now.

Every evening after the walk, I would look at the achievements handed out by my Watch which is also why I’ve never updated my movement goals in the Activity app because getting all these badges, honestly, was a lot of fun and very motivational. Every evening I would get notified of increasing my movement streak, of doubling or even tripling my movement goal and of tripling or quadrupling my exercise goal.

screenshot of the activity app

Every morning I would weigh myself and bask in the glory of the ever-falling graph painted by the (back then very good) iOS app that came with the scale. I managed to lose a very consistent 2kg (4.4lbs) per week.

Every walk gave me the chance to experience some of nature’s beauty.

Crazy sunsets

sunset

Beautiful sunrises

sunrise

Enchanted forests

wintery forest

And frozen creeks

frozen creek

And in spring I could watch trees grow.

When I got home after up to three hours of walking, I was dead tired by around 10pm, meaning that for the first time in ages I would get more than enough sleep and still be able to get up between 6 and 7.

By mid-March, after six months of a very strict diet and walking home every day, I was done: I had lost 40kg (88.1 lbs).

Now the challenge shifted from losing weight to not gaining weight. I decided to make the diet less strict but also continue with my walks, though I would not do the regular 19km ones any more as they would just take too long (3 hours).

But by June, I really started to notice a change: I wouldn’t feel these walks at all any more. No sweat, no appreciable change in heart rate while on them, no tiredness. The walks really started to feel like a waste of time.

So I started running.

I never liked running. I was always bad at it. All the way through school, where I was the slowest and always felt really bad afterwards, through my life until now, where I just never did it. Running felt bad and I hated it.

But now things were different.

The first time I changed from walking to running, I did so after reaching the peak altitude, so it was mostly straight and a little bit downhill. But still: I ran 4km (2.48 miles) and when I got home I didn’t feel much more tired.

I was very surprised, because running 4km would have been completely unthinkable to me at any earlier point in my life, but there I was. I had just done it.

So the next day, I decided to run most of the way, just skipping the steepest parts. Suddenly, there I was, running 8km (4.9 miles) and still not feeling particularly tired afterwards.

So I started tracking these runs (using both Runkeeper and Strava for technical reasons – but that’s another post), seeing improvement in my time all the way through July.

And then, on August 1st, I ran a half marathon, climbing 612m (2007 feet).

(screenshot: the tracked half-marathon run)

Considering that this was my first, the time isn’t even too bad, and what’s even more fun to me: I didn’t feel too tired afterwards and totally felt like I could have run even farther.

So I guess that by taking it very slowly and moving from walking a bit, to walking more, to walking a lot, to running a bit, to running some more, even I, the most unathletic person possible, can push myself into shape.

But the most interesting aspect in all of this is that without technology, the Apple Watch in particular, and without the cheesy achievements, none of this would ever have happened. I hated sports and I’m honestly still not really interested. But the prospect of getting awarded some stupid badges every day is what finally pushed me.

And now, in only a single month, my girlfriend will finally return to Switzerland, and I guess she’ll find me in better shape than she’s ever seen me before. I hope that the prospect of collecting some more badges from my watch will keep me going even when the social pressure might tempt me into skipping a workout.

A rant on brace placement

Many people consider it to be good coding style to have braces (in languages that use them for block boundaries) on their own line. Like so:

function doSomething($param1, $param2)
{
    echo "param1: $param1 / param2: $param2";
}

Their argument usually is that it clearly shows the block boundaries, thus increasing readability. I, as a proponent of placing braces at the end of the statement opening the block, strongly disagree. I would format the above code like so:

function doSomething($param1, $param2){
    echo "param1: $param1 / param2: $param2";
}

Here is why I prefer this:

  • In many languages, code blocks don’t have their own identity – functions do, but not blocks (they don’t provide scope). By placing the opening brace on its own line, you emphasize the block, but you actually make it harder to see what opened the block in the first place.
  • With correct indentation, the presence of the block should be obvious anyway. There is no need to emphasize it further (at the cost of the readability of the block-opening statement).
  • I doubt that using one line per token really makes the code more readable. Heck… why don’t we write that sample code like so?
function
doSomething
(
$param1,
$param2
)
{
    echo "param1: $param1 / param2: $param2";
}

PostgreSQL on Ubuntu

Today, it was time to provision another virtual machine. While I’m a large fan of Gentoo, there were some reasons that made me decide to gradually start switching over to Ubuntu Linux for our servers:

  • One of the large advantages of Gentoo is that it’s possible to get bleeding-edge packages. Or at least you’re supposed to be able to. Lately, it’s been taking longer and longer for an ebuild of an updated version to finally become available. Take PostgreSQL for example: it took about 8 months for 8.2 to become available, and it looks like history is repeating itself for 8.3.
  • It seems like there are more flamewars than real development going on in Gentoo-land lately (which in the end leads to the above problem).
  • Sometimes, init scripts and other things change over time and there is not always a clear upgrade path. Run emerge -u world once, forget to etc-update, and on the next reboot, all hell will break loose.
  • Installing a new system takes ages due to the manual installation process. I’m not saying it’s hard. It’s just time-intensive.

Earlier, the advantage of having current packages greatly outweighed the issues that come with Gentoo, but lately, due to the current state of the project, it’s taking longer and longer for packages to become available. So that advantage fades away, leaving me with only the disadvantages.

So at least for now, I’m sorry to say, Gentoo has outlived its usefulness on my production servers and has been replaced by Ubuntu, which, albeit not bleeding-edge with packages, at least provides a very clean upgrade path and installs quickly.

But back to the topic which is the installation of PostgreSQL on Ubuntu.

(It’s ironic, by the way, that Postgres 8.3 actually is in the current Hardy beta, together with a framework for concurrently running multiple versions, whereas it’s still nowhere to be seen for Gentoo. Granted, an experimental overlay exists, but it’s mostly untested and I had some headaches installing it on a dev machine.)

After installing the packages, you may wonder how to get it running. At least I wondered.

/etc/init.d/postgresql-8.3 start

did nothing (not a very nice thing to do, btw). initdb wasn’t in the path. This was a real WTF moment for me and I assumed some problem in the package installation.

But in the end, it turned out to be an (underdocumented) feature: Ubuntu comes with a really nice framework for keeping multiple versions of PostgreSQL running at the same time. And it comes with scripts that help set up that configuration.

So what I had to do was to create a cluster with

pg_createcluster --lc-collate=de_CH --lc-ctype=de_CH -e utf-8 8.3 main

(your settings may vary – especially the locale settings)

Then it worked flawlessly.
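
The same framework also ships tools for inspecting and controlling clusters, which I found handy while figuring this out (both come with the postgresql-common package):

pg_lsclusters                  # list all clusters with version, port and status
pg_ctlcluster 8.3 main start   # start/stop/restart a specific cluster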

I do have some issues with this process though:

  • It’s underdocumented. Good thing I speak Perl and Bash, so I could use the source to figure this out.
  • In contrast to about every other package in Ubuntu, the default installation does not leave you with a working setup. You have to manually create the cluster after installing the packages.
  • pg_createcluster --help bails out with an error.
  • I had /var/lib/postgresql on its own partition and forgot to remount it after a reboot, which caused the init script to fail with a couple of “uninitialized value” errors from Perl itself. This should be handled more cleanly.

Still, it’s a nice configuration scheme and real progress compared to Gentoo. The only thing left for me now is to report these issues to the bug tracker and hope to see them fixed eventually. And if they aren’t, there is this post here to remind me and my visitors.

Another new look

It has been a while since the last redesign of gnegg.ch, but is a new look after just a little more than one year of usage really needed?

The point is that I have changed blogging engines yet again. This time from Serendipity to WordPress.

What motivated the change?

Interestingly enough, if you ask me, s9y is clearly the better product than WordPress. If WordPress is Mac OS, then s9y is Linux: it has more features, it’s based on cleaner code, and it doesn’t have any commercial backing at all. So the question remains: why switch?

Because that OSX/Linux analogy also works the other way around: s9y is an ugly duckling compared to WP. External tools won’t work (well) with s9y because it isn’t well known enough. The number of knobs to tweak is sometimes overwhelming, and the available plugins are not nearly as polished as the WP ones.

All of these are reasons that made me switch. I’ve used an s9y-to-WP converter, but some heavy tweaking was needed to make it actually transfer category assignments and tags (the former didn’t work, the latter wasn’t even implemented). Unfortunately, the changes were too hackish to publish here, but it’s quite easily done.

Aside from that, most of the site has survived the switch quite nicely (the permalinks are broken once again, though), so let’s see how this goes :-)

Impressed by git

The company I’m working with is a Subversion shop. It has been for a long time – since fall of 2004 actually, when I finally decided that the time of CVS was over and that I was going to move to Subversion. As I was the only developer back then, and as the whole infrastructure mainly consisted of CVS and ViewVC (cvsweb back then), this move was an easy one.

Now we are a team of three developers, heavy Trac users and truly dependent on Subversion, which is – mainly due to the amount of infrastructure that we built around it – not going away anytime soon.

But nonetheless, we (mainly I) were feeling the shortcomings of Subversion:

  • Branching is not something you do easily. I tried working with branches before, but merging them really hurt, thus making it somewhat prohibitive to branch often.
  • Sometimes, half-finished stuff ends up in the repository. This is unavoidable, considering that the alternative is keeping a bucketload of uncommitted changes around in the working copy.
  • Code review is difficult, as actually trying out patches is a real pain: sending, applying and reverting them is all manual work.
  • A pet peeve of mine, though, is untested, experimental features developed out of sheer interest. Stuff like that lies around in the working copy, waiting to be reviewed or even just to have its real-life use discussed. Sooner or later, a needed change must go in, and you have the options of either sneaking in the change (bad), manually diffing it out (sometimes hard to do) or just forgetting about it and svn reverting it (a real shame).

Ever since the Linux kernel first began using BitKeeper to track development, I’ve known that there is no technical reason for these problems. I knew that a solution for all this existed and that I just wasn’t ready to try it.

Last weekend, I finally had a look at the different distributed revision control systems out there. Due to the insane amount of infrastructure built around Subversion, and so as not to scare off my team members, I wanted something that integrated with Subversion, using that repository as the official place where code ends up, while still giving us the freedom to fix all the problems listed above.

I had a closer look at both Mercurial and git, though in the end, it was git’s nicely working SVN integration that made me look at it more closely.

Contrary to what everyone is saying, I have no problem with the interface of the tool – once you learn the terminology, it’s quite easy to get used to the system. So far, I’ve done a lot of testing with both live and test repositories – everything working out very nicely. I’ve already seen git’s impressive branch-merging abilities (to think that in Subversion you actually have to a) find out at which revision a branch was created and b) remember every patch you cherry-picked… crazy) and I’m getting into the details more and more.

On our Trac installation, I’ve written a tutorial on how we could use git in conjunction with the central Subversion server, which allowed me to learn quite a lot about how git works and what it can do for us.
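
The core of that workflow boils down to a handful of git-svn commands – the repository URL is obviously just a placeholder:

# one-time: clone the central Subversion repository
# (-s assumes the standard trunk/branches/tags layout)
git svn clone -s https://svn.example.com/repo myproject
cd myproject

# hack away on as many local branches as you like
git checkout -b some-experiment

# pull in what the others have committed to Subversion
git svn rebase

# push your finished commits back into the central repository
git svn dcommit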

So for me it’s git-all-the-way now and I’m already looking forward to being able to create many little branches containing many little experimental features.

If you have the time and you are interested in gaining many unexpected freedoms in matters of source code management, you too should have a look at git. Also consider that on the Subversion side, no change is needed at all, meaning that even if you are forced to use Subversion, you can privately use git to help you manage your work. Nobody would ever have to know.

Very, very nice.