Fiber7 TV behind PFSense

As I’ve stated previously, I’m subscribed to what is probably the coolest ISP on earth. Between the full symmetric Gbit/s, their stance on network neutrality, their IPv6 support and their awesome support staff (who even help with advanced things like setting up an IPv6 reverse DNS delegation!), there’s nothing more you could ever wish for from an ISP.

For some time now, they have also provided an IPTV solution as an additional subscription called tv7.

As somebody who last watched live TV around 20 years ago, I wasn’t really interested in subscribing to that. However, what’s special about the Fiber7 solution, contrary to many other IPTV solutions, is that they use IP multicast to deliver the unaltered DVB frames to their users.

For people interested in TV, this is great because it’s, for all intents and purposes, lag free as the data is broadcast directly through their network where interested clients can just pick it up (of course there will be some <1ms lag for the data to move through their network plus some additional <1ms lag as your router forwards the packets to your internal network).

As I had never dealt with IP multicast, this was an interesting experiment for me, and when they released their initial offering, they provided a test stream to check whether your infrastructure was multicast-ready or not.

Back then, I never got it to work behind my PFSense setup, but as I wasn’t interested in TV, I never bothered spending time on it, though it did hurt my pride.

Fast forward to about three weeks ago, when I made a comment on Twitter about that hurt pride to the CEO of Fiber7. He informed me that the test stream was down, but then he also sent me a DM to ask whether I was interested in trying out their tv7 offering, including the beta version of their app for the AppleTV.

That was one evil way to nerd-snipe me, so naturally I told him that, yes, I would be interested, but that I wasn’t really ever going to use it aside from just getting it to work, because live TV just doesn’t interest me.

Despite the fact that it was past 10pm, he sent me another DM, telling me that he had enabled tv7 for my account.

I spent the rest of the night experimenting with IGMP Proxy and the PFSense firewall with varying success, but on the next day I was finally successful.

You might notice that this is a screenshot of VLC. That’s no coincidence: while Fiber7 officially only supports the AppleTV app, they also offer links on one of their support pages to m3u and xspf playlists for advanced users (which is another case of Fiber7 being awesome). While debugging to make this work, I definitely preferred VLC, which has a proper debug log.

After I got it to work, I also found a bug in the Beta version of the Fiber7 app where it would never unsubscribe from a multicast group, causing the traffic to my LAN to increase whenever I would switch channels in the app. The traffic wouldn’t decrease even if the AppleTV went to sleep – only a reboot would help.

I reported this to Fiber7, and within a day or two a new release was pushed to TestFlight to fix the issue.

Since this little adventure happened, Fiber7 has changed their offering: Now every Fiber7 account gets free access to tv7 which will probably broaden the possible audience quite a bit.

Which brings me to the second point of this post: To show you the configuration needed if you’re using a PFSense based gateway and you want to make use of tv7.

First, you have to enable the IGMP proxy:

[Screenshot: the IGMP Proxy configuration in PFSense]

For the LAN interface, please type in the network address and netmask of your internal IPv4 LAN.
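
If you prefer to sanity-check what these settings end up as, the configuration PFSense writes to /var/etc/igmpproxy.conf looks roughly like the sketch below. The interface names and the LAN subnet are placeholders you’d replace with your own; the upstream altnet is the Fiber7 network I’ll come back to further down.

# /var/etc/igmpproxy.conf (sketch; igb0, igb1 and 192.168.1.0/24 are placeholders)
quickleave

# WAN: join groups upstream; altnet lists the source networks we accept traffic from
phyint igb0 upstream ratelimit 0 threshold 1
    altnet 77.109.128.0/19

# LAN: forward the joined groups to clients in the internal network
phyint igb1 downstream ratelimit 0 threshold 1
    altnet 192.168.1.0/24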

What IGMP Proxy does is listen for clients in your LAN joining a multicast group and then join that group on their behalf on the upstream interface. It then forwards all traffic received on the upstream interface for that group to the group on the downstream interface. This is where the additional small bit of lag is added, but it is the only way to have multicast cross routing boundaries.

This forwarding is also mostly done on your router’s CPU, but at the 20 Mbit/s a stream consumes, this shouldn’t be a problem on more or less current hardware.

Anyways – if you want to actually watch TV, you’re not done yet: even though the service is now running, the built-in firewall will still drop any packets related to joining multicast groups as well as the actual multicast packets containing the video frames.

So the next step is to update the firewall:

Create the following rules for your WAN interface:

[Screenshot: the two firewall rules on the WAN interface allowing IGMP and the multicast streams]

You will notice the little gear icon next to the rule. It means that additional options are enabled. The extra option you need to enable is this one here:

[Screenshot: the “Allow IP options” checkbox in the rule’s advanced options]

I don’t really like the second of the two rules. In principle, you only need to allow a single IP: that of your upstream gateway. But that address might change whenever your IPv4 address changes, and I don’t think you will want to manually update your firewall rule every time.

Instead, I’m allowing all IGMP traffic from the WAN net, trusting Fiber7 not to leak other subscribers’ IGMP traffic into my network.
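
For reference, what those two WAN rules boil down to is roughly the following pf syntax. This is a hand-written sketch, not what PFSense generates verbatim; igb0 stands in for your WAN interface, and allow-opts is what the “Allow IP options” checkbox maps to.

# the multicast UDP streams carrying the DVB frames
pass in quick on igb0 proto udp from 77.109.128.0/19 to 224.0.0.0/4 allow-opts

# IGMP membership traffic from the WAN network; IGMP packets carry the
# Router Alert IP option, hence allow-opts
pass in quick on igb0 proto igmp from (igb0:network) to any allow-opts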

Unfortunately, you’re still not quite done.

While this configures the rules for the WAN interface, the default “pass all” rule on the LAN interface will still drop all video packets, because the “Allow IP options” checkbox shown above is off by default for that rule.

You have to update that too on the “LAN” interface:

[Screenshot: the default LAN pass rule with “Allow IP options” enabled]

And that’s all.

The network I’m listing there, 77.109.128.0/19, is not officially documented. Fiber7 might change it at any time, at which point your nice setup will stop working and you’ll have to update the IGMP Proxy and firewall configuration.

In my case, I determined the network address by running

/usr/local/sbin/igmpproxy -d -vvvv /var/etc/igmpproxy.conf

and checking the error message igmpproxy printed when it refused to forward traffic coming from a network it didn’t know about. I then looked up the network of that address using whois and updated my config accordingly.
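
For example, assuming the offending source address was somewhere in that range (the address below is just an illustration, not anything Fiber7 documents), a quick

whois 77.109.128.1 | grep -iE 'inetnum|route'

shows the surrounding network and its prefix length, which is what goes into the config.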

Why I recommend against JWT

JSON Web Tokens are all the rage lately. They are lauded as being a stateless alternative to server-side cookies and as the perfect way to do authentication in your single-page app, and some people also sell them as a workaround for the EU cookie policy because, you know, they work without cookies too.

If you ask me though, I would always recommend against the use of JWT to solve your problem.

Let me give you a few arguments to debunk, from worst to best:

Debunking arguments

It requires no cookies

General “best” practice stores the JWT in the browser’s local storage and then sends it to the server with every authenticated API call.

This is no different from a traditional cookie, except that transmission to the server isn’t done automatically by the browser (which it would be for a cookie) and that it is significantly less secure than a cookie: as there is no way to set a value in local storage outside of JavaScript, there consequently is no equivalent to cookies’ httponly flag. This means that any XSS vulnerability in your frontend now gives an attacker access to the JWT.

Worse, as people often use JWT for both a short-lived access token and a refresh token, this means that any XSS vulnerability now gives the attacker access to a valid refresh token that can be used to create new access tokens at will, even when your session has expired, in the process completely invalidating all the benefits of having separate refresh and access tokens.

“But at least I don’t need to display one of those EU cookie warnings” I hear you say. But did you know that the warning is only required for tracking cookies? Cookies that are required for the operation of your site (so a traditional session cookie) don’t require you to put up that warning in the first place.
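
For comparison, here is a minimal sketch of the traditional alternative, using Flask purely as an illustration: the server hands out an opaque session ID in an httponly cookie, which the browser sends along automatically and which no JavaScript (and therefore no XSS payload) can ever read.

import secrets
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login", methods=["POST"])
def login():
    resp = make_response("logged in")
    # httponly: invisible to JavaScript; secure: only ever sent over HTTPS.
    # The value is an opaque random ID the server looks up, not a JWT.
    resp.set_cookie("session", secrets.token_urlsafe(32),
                    httponly=True, secure=True, samesite="Lax")
    return resp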

It’s stateless

This is another often-used argument in favour of JWT: because the server can put all the required state into the token, there’s no need to store anything on the server side, so you can load-balance incoming requests to whatever app server you want and you don’t need any central store for session state.

In general, that’s true, but it becomes an issue once you need to revoke or refresh tokens.

JWT is often used in conjunction with OAuth where the server issues a relatively short-lived access token and a longer-lived refresh token.

If a client wants to refresh its access token, it uses its refresh token to do so. The server validates it and then hands out a new access token.

But for security reasons, you don’t want that refresh token to be re-used (otherwise, a leaked refresh token could be used to gain access to the site for its whole validity period), and you probably also want to invalidate the previously used access token; otherwise, if that has leaked, it could be used until its expiration date even though the legitimate client has already refreshed it.

So you need a means to blacklist tokens.

Which means you’re back at keeping track of state, because that’s the only way to do this. Either you blacklist the whole binary representation of the token, or you put some unique ID into the token and then blacklist that (and compare after decoding the token), but whatever you do, you still need to keep track of that shared state.
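
To make that concrete, here is a minimal sketch of the second variant (blacklisting by unique ID), using the PyJWT library and a plain in-memory set as the shared state; in a real deployment that set would live in Redis or a database all your app servers can reach.

import time
import uuid

import jwt  # PyJWT

SECRET = "change-me"   # shared signing key; a placeholder
revoked_jtis = set()   # the shared state you were hoping to avoid

def issue_token(user_id, lifetime=300):
    payload = {
        "sub": user_id,
        "jti": str(uuid.uuid4()),            # unique ID we can blacklist later
        "exp": int(time.time()) + lifetime,  # short-lived access token
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def check_token(token):
    # Pin the accepted algorithms; never trust what the token header claims.
    payload = jwt.decode(token, SECRET, algorithms=["HS256"])
    if payload["jti"] in revoked_jtis:
        raise jwt.InvalidTokenError("token has been revoked")
    return payload

def revoke_token(token):
    payload = jwt.decode(token, SECRET, algorithms=["HS256"])
    revoked_jtis.add(payload["jti"])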

And once you’re doing that, you lose all the perceived advantages of statelessness.

Worse: because the server has to invalidate and blacklist both the access and the refresh token when a refresh happens, a connection failure during a refresh can leave a client without a valid token, forcing users to log in again.

In today’s world of mostly mobile clients on mobile phone networks, this happens more often than you’d think, especially as your access tokens should be relatively short-lived.

It’s better than rolling your own crypto

In general, yes, I agree with that argument. Anything is better than rolling your own crypto. But are you sure your library of choice has implemented the signature check and decryption correctly? Are you keeping up to date with security flaws in your library of choice (or its dependencies)?

You know what is still better than using existing crypto? Using no crypto whatsoever. If all you hand out to the client to keep is a completely random token and all you do is look up the data assigned to that token, then there’s no crypto anybody could get wrong.
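
A sketch of that approach, to show just how little there is to it (the in-memory dict stands in for whatever shared store your app servers use):

import secrets

sessions = {}   # token -> session data; Redis or a database in practice

def issue_token(user_id):
    # 256 bits of randomness: nothing to sign, nothing to forge
    token = secrets.token_urlsafe(32)
    sessions[token] = {"user_id": user_id}
    return token

def authenticate(token):
    # A single lookup replaces all signature checking, and revocation
    # is just a delete from the store.
    return sessions.get(token)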

A solution in search of a problem

So once all good arguments in favour of JWT have dissolved, you’re left with all their disadvantages:

  • By default, the JWT spec allows for insecure algorithms and key sizes. It’s up to you to choose safe parameters for your application.
  • Doing JWT means you’re doing crypto and decrypting potentially hostile data. Are you up to this additional complexity compared to a single primary-key lookup?
  • JWTs contain quite a bit of metadata and other bookkeeping information. Transmitting this on every request is more expensive than just transmitting a single ID.
  • It’s brittle: your application has to make sure it never makes a request to the server without the token present. Every AJAX request your frontend makes needs to manually append the token, and as the server has to blacklist both access and refresh tokens whenever they are used, you might accidentally end up without a valid token when the connection fails during a refresh.

So are they really useless?

Even despite all these negative arguments, I think that JWTs are great for one specific purpose, and that’s authentication between different services in the backend if the various services can’t trust each other.

In such a case, you can use very short-lived tokens (with a lifetime measured in seconds at most) and you never have them leave your internal network. All the clients ever see is a traditional session-cookie (in case of a browser-based frontend) or a traditional OAuth access token.

This session cookie or access token is checked by the frontend servers (which, yes, have to have access to some shared state, but this isn’t an unsolvable issue), which then issue the required short-lived JWTs to talk to the various backend services.

Or you use them when you have two loosely coupled backend services that trust each other and need to talk to each other. There, too, you can issue short-lived tokens (given you are aware of the security issues described above).

In the case of short-lived tokens that never go to the user, you circumvent most of the issues outlined above: they can be truly stateless because, thanks to their short lifetime, you don’t ever need to blacklist them, and they can be stored in a location that’s not exposed to possible XSS attacks against your frontend.
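
Here is a minimal sketch of what such a backend-to-backend token could look like, again with PyJWT; the service names, the audience and the 30-second lifetime are purely illustrative assumptions, not anything prescribed.

import time

import jwt  # PyJWT

SHARED_KEY = "change-me"   # key shared between the two backend services

def issue_service_token(user_id):
    now = int(time.time())
    payload = {
        "iss": "frontend-gateway",   # who issued the token
        "aud": "billing-service",    # which backend is allowed to accept it
        "sub": user_id,              # on whose behalf we are acting
        "iat": now,
        "exp": now + 30,             # seconds, not hours: no blacklist needed
    }
    return jwt.encode(payload, SHARED_KEY, algorithm="HS256")

def verify_service_token(token):
    # The audience check ensures a token minted for one service can't be
    # replayed against another one.
    return jwt.decode(token, SHARED_KEY, algorithms=["HS256"],
                      audience="billing-service")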

This just leaves the issue of the difficult-to-get-right crypto, but as you never accept tokens from untrusted sources, a whole class of possible attacks becomes impossible, so you might even get away with not updating your JWT library on an overly tight schedule.

So, please, when you are writing your next web API that uses any kind of authentication and you ask yourself “should I use JWT for this”, resist the temptation. Using plain opaque tokens is always better when you talk to an untrusted frontend.

Only when you are working on scaling out your application and splitting it into multiple disconnected microservices and you need a way to pass credentials between them, then by all means go ahead and investigate JWT – it’ll surely be better than cobbling something together yourself.