Tag Archive: Tunnel


This article isn’t really written with production networks in mind. It’s more of an “I have not failed. I’ve just found 10,000 ways that won’t work.” kind of article.

I’m currently in a mailing group with fellow network engineers who are setting up GRE tunnels to each other’s home networks over the public internet. Over those tunnels we speak external BGP with each other, and each engineer announces his own private address range. With around 10 engineers so far and a partial mesh of tunnels, it gives a useful topology to troubleshoot and experiment with. Just like on the real internet, you don’t know what will happen from day to day: neighborships may go down, new ones may suddenly be added, and a different next hop may become more attractive for some routes.

[Diagram: SwitchRouting1]

But of course it requires a device at home capable of both GRE and BGP. A Cisco router will do, as will a Linux box with Quagga, and many other routers. But the only device I currently have running 24/7 is my WS-C3560-8PC switch. Although it runs an IP Services IOS, is already routing, and can do GRE and BGP, it doesn’t do NAT. Easy enough: allow GRE through on the router that does the NAT in the home network. Turns out the old DD-WRT version on my current router doesn’t support that. Sure, I could replace it, but that would cost me a new router and it wouldn’t be a challenge.

[Diagram: SwitchRouting2]

Solution: give the switch a direct public IP address and build the tunnels from there. After all, the internal IP addresses are encapsulated in GRE for transport, so no NAT is required for them. Since the switch already has a default route towards the router, set up a host route (a /32) per remote GRE endpoint, pointing at the provider gateway. However, this introduces asymmetric routing: the provider subnet is a connected subnet for the switch, so incoming traffic will go through the router while outgoing traffic goes directly from the switch to the internet, without NAT. Of course that will not work.
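For illustration, the host routes would look something like this (all addresses are made up: 198.51.100.10 and 198.51.100.20 stand in for remote GRE endpoints, 203.0.113.1 for the provider gateway on the connected subnet):

! one host route (/32) per remote GRE endpoint, pointing straight at the provider gateway
ip route 198.51.100.10 255.255.255.255 203.0.113.1
ip route 198.51.100.20 255.255.255.255 203.0.113.1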

[Diagram: SwitchRouting3]

So yet another problem to work around. This can largely be solved using Policy-Based Routing (PBR): on the client VLAN interface, redirect all traffic not destined for a private range towards the router, as sketched below. But again, this has complications: the routing table no longer reflects the actual forwarding, it adds administrative overhead, and packets originated by the switch itself will still follow the default route (the 3560 does not support PBR for locally generated packets).
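A minimal sketch of that PBR idea; the VLAN number and the router address 192.168.1.254 are made-up examples, not the actual setup:

! traffic towards private ranges is denied here, so it falls through to normal routing
access-list 101 deny ip any 10.0.0.0 0.255.255.255
access-list 101 deny ip any 172.16.0.0 0.15.255.255
access-list 101 deny ip any 192.168.0.0 0.0.255.255
access-list 101 permit ip any any
!
route-map INTERNET permit 10
 match ip address 101
 set ip next-hop 192.168.1.254
!
interface Vlan10
 description client VLAN
 ip policy route-map INTERNET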

Next idea: it would be nice to have an extra device that can do GRE and BGP directly towards the internet, so my switch can simply route private-range packets towards it. But the constraint is no new device. That brings me to VRFs: split the current 3560 in two, with one routing table for the internal routing (vrf MAIN) and one for the GRE tunnels (vrf BGP). However, to connect the two VRFs on the same physical device I would need to loop a cable from one switchport to another, and I only have 8 ports. The rest would work out fine: point the private ranges from a VLAN interface in one VRF to a next-hop VLAN interface in the other VRF over that cable. That second VRF can have a default route towards the internet and terminate the GRE tunnels. The two VRFs would share one subnet.

[Diagram: SwitchRouting4]

Since I don’t want to deal with that extra cable, would it be possible to route between VRFs internally? I’ve tried similar things before, but those required a route-map and a physical incoming interface; I might as well use PBR if I go that way. Dedicated internal interfaces for routing between VRFs exist on the ASR series, but not on my simple 8-port 3560. But what if I replace the cable with tunnel interfaces? Is it possible to put the two endpoints in different VRFs? Yes, the 15.0(2) IOS supports it!

[Diagram: SwitchRouting5]

The tunnel interfaces have two commands that are useful for this:

  • vrf forwarding: just like on any other layer 3 interface, it specifies the routing table used for the packets inside the tunnel.
  • tunnel vrf: specifies the VRF in which the GRE-encapsulated packets are routed, i.e. the routing table used to reach the tunnel destination.

With these two commands, it’s possible to have tunnels in one VRF transporting packets for another VRF. The concept is vaguely similar to MPLS-VPN, where the intermediate (provider) routers have only one routing table, which is used to transport packets between the VRF-aware routers (the provider edge).

interface Vlan2
 ip address 192.168.2.1 255.255.255.0
!
interface Vlan3
 ip address 192.168.3.1 255.255.255.0
!
interface Tunnel75
 vrf forwarding MAIN
 ip address 192.168.7.5 255.255.255.252
 tunnel source Vlan2
 tunnel destination 192.168.3.1
!
interface Tunnel76
 vrf forwarding BGP
 ip address 192.168.7.6 255.255.255.252
 tunnel source Vlan3
 tunnel destination 192.168.2.1

So I configure two tunnel interfaces, with their sources and destinations in the main (global) routing table. Source and destination are two IP addresses locally configured on the device; I chose VLAN interfaces, but loopbacks will likely work as well. Inside the tunnels, one is placed in the first VRF, the other in the second. One of the VRFs may be shared with the main (outside the tunnels) routing table, but it’s not a requirement. Configure both tunnel interfaces as the two sides of a point-to-point link and they come up. Ping works, and even an MTU of 1500 works over the tunnels, despite the show interface command showing an MTU of only 1476!

Next, I set up BGP to be VRF-aware. Logically there are two ‘routers’: one is the endpoint for the GRE tunnels, and the other sits behind it and handles the internal routing. If these were two physical routers, I would run internal BGP between them, since I’m already using that protocol anyway. It’s no different here: you can make the two VRFs speak BGP to each other within a single configuration.

router bgp 65000
 address-family ipv4 vrf MAIN
  neighbor 192.168.7.6 remote-as 65000
  network 192.168.0.0 mask 255.255.248.0
  neighbor 192.168.7.6 activate
 exit-address-family
 !
 address-family ipv4 vrf BGP
  bgp router-id 192.168.7.6
  neighbor 192.168.7.5 remote-as 65000
  neighbor 192.168.7.5 activate
 exit-address-family

A few points did surface: you need to specify the neighbors (the IP addresses of the local device in the different VRFs) under the correct address families. You also need to specify a route distinguisher under each VRF definition, as it is required for VRF-aware BGP. And maybe the most ironic part: you need to set a bgp router-id inside one of the VRF address families so it differs from the other VRF (by default both take the highest interface IP address), otherwise the two ‘BGP peers’ will notice the duplicate router-id and the session will not come up. But after all of that, BGP establishes and routes are exchanged between the two VRFs! Finally, for the GRE tunnels towards the internet, the tunnel vrf command is needed so the encapsulated packets use the correct routing table to reach the internet.
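For reference, a minimal sketch of those extra pieces, as I understand the setup (the rd values, the Vlan100 source, the Tunnel10 number and the 198.51.100.10 peer address are all placeholders, not the actual configuration):

vrf definition MAIN
 rd 65000:1
 address-family ipv4
 exit-address-family
!
vrf definition BGP
 rd 65000:2
 address-family ipv4
 exit-address-family
!
! internet-facing GRE tunnel: both the inside and the transport sit in vrf BGP,
! which holds the public IP and the default route towards the internet
interface Tunnel10
 vrf forwarding BGP
 ip address 172.16.0.1 255.255.255.252
 tunnel source Vlan100
 tunnel destination 198.51.100.10
 tunnel vrf BGP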

So what makes this not production-worthy? The software-switching.

The ASIC can only perform a set number of actions in a certain sequence without punting to the switch CPU. Doing a layer 2 CAM table lookup or a layer 3 RIB lookup is one thing. But receiving a packet, having the RIB point it into a GRE tunnel, encapsulating, decapsulating, and doing a RIB lookup in another VRF is too much. The IOS software follows the expected steps in the code: it does not ‘see’ what the point is and takes no shortcuts. GRE headers are actually calculated for each packet traversing the ‘internal tunnel’ link. I’ve done a stress test and the CPU maxed out at 100% at… 700 kBps, about 5.6 Mbps. So while this is a very interesting configuration and an ideal situation to learn from, it’s just lab stuff.

So that’s the lesson, as stated in the beginning: how not to do it. Can you route between VRFs internally on a Cisco switch or router (not including ASR series)? Yes. Would you want to do it? No!

A little bit of everything.

Yes, a bit of everything, that’s what it has been lately. First, I’m upgrading my home lab switches with more recent IOS versions. The 3560 on my desk can now run EIGRP for IPv6. My 2970 gigabit switch will follow tomorrow, with a K9 IOS this time to make it accessible via SSH.

Second is that I’ve been fine-tuning my knowledge of layer 2 security features, using my 3560 desk switch and a 3750 test switch as subjects. RA Guard works great, and so does DHCP Snooping. DHCP Snooping has revealed a third functionality to me, next to countering rogue DHCP servers and preventing DHCP flooding: it also detects when a MAC address that sends an INFORM cannot be present on that port according to the mac address-table. It then generates a ‘%DHCP_SNOOPING-5-DHCP_SNOOPING_MATCH_MAC_FAIL’ message and drops the frame. It seems to be a functionality related to ARP Inspection.
ARP Inspection, on the other hand, requires some planning of your DHCP servers: if multiple are present and they all reply at the same time, the DHCP Snooping feature, on which ARP Inspection relies, sometimes picks the wrong packet to add to its binding table. The client then configures itself with one of the other offers it received, and ARP Inspection concludes there’s spoofing going on. I’m still figuring out how to effectively counter that.
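For reference, a basic DHCP Snooping and Dynamic ARP Inspection setup on a Catalyst switch looks roughly like this (VLAN 10 and the uplink port are arbitrary lab examples, not a recommendation):

ip dhcp snooping
ip dhcp snooping vlan 10
ip arp inspection vlan 10
!
interface GigabitEthernet0/1
 description uplink towards the DHCP server
 ip dhcp snooping trust
 ip arp inspection trust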

Third is that I’ve ordered the CCIE Routing and Switching Certification Guide, 4th Edition hardcover, so I’ll have a lot of reading to do soon. I have to admit that so far I don’t like reading ebooks on a big screen, and I’m reluctant to buy a reader.
Yesterday I also tried an MPLS lab for the first time, with MP-BGP in GNS3. It took me several hours, but I managed to get it running. Not bad for never having done anything MPLS-related before. Still, it’s a huge topic and I’ll need to learn a lot more about it.

And last, I tested an Aruba Remote Access Point (RAP). I’ve already tested Instant Access Points; the RAP works differently: once booted, it needs an internet connection. When you connect a computer (it has LAN interfaces, just like a consumer-grade router), it redirects you to a setup page where you have to enter the public IP address of a Wireless LAN Controller (WLC). It then tries to negotiate a tunnel using NAT-T over UDP port 4500 towards that WLC. This works by encapsulating IPsec in a UDP header, bypassing any NAT devices that are incapable of keeping state for IPsec.
The RAP tries to authenticate itself to the WLC using its MAC address. After whitelisting it and configuring a wireless profile (which contains the list of SSIDs to send out), I had to reboot the RAP. I ended up rebooting it several times, thinking it didn’t work, but eventually it turned out my cable had broken from all the plugging in and out. The RAP booted fine and started sending out the correct SSIDs. Initially, the wireless connection didn’t hand out an IP to me, but after five minutes everything suddenly got an IP and started working as if there had never been a problem. I’m not sure why this happened, although I suspect my NAT router of dropping some of the UDP packets (it wouldn’t be the first time).

A little bit of everything indeed.

Different types of VPN explained.

Since I’m going to talk more about VPNs in the upcoming weeks, I’m going to explain the different types of VPN here. No configuration guides, but an explanation so it’s clear what is what.

For those who aren’t sure what a VPN is: a Virtual Private Network is an encrypted connection between two or more devices over a public network. Some may argue that it doesn’t necessarily have to be encrypted, but when it’s not, I just call it a tunnel. Here’s a list of the types:

[Diagram: S2SVPN]

Site-to-site VPN
Often abbreviated to S2SVPN. It’s a connection between two sites and encrypts all traffic between two (or multiple) subnets. There are two types of S2SVPN:

  • Policy-based: traffic matching an ACL (the ‘interesting traffic’) is encrypted and sent to the remote VPN peer.
  • Routed: traffic is routed into an encrypted tunnel to the remote VPN peer.

For a detailed explanation and configuration, Jeremy made some excellent posts about this on Packetlife: Part 1 for policy-based and Part 2 for routed.

[Diagram: DMVPN]

DMVPN
A dynamic multipoint VPN is not a protocol in itself but rather a technique combining several protocols. One or more central hub routers are required, but the remote (spoke) routers can have dynamic IPs, and more can be added without having to modify the configuration on the hub router(s) or on any other spoke router. The routers use the Next Hop Resolution Protocol, combined with a dynamic routing protocol, to discover remote peers and subnets. The VPN itself is an encrypted mGRE tunnel (GRE with multiple endpoints). This way, traffic between spoke routers does not have to pass through the hub router but can be sent directly from spoke to spoke.

[Diagram: ClientVPN]

Client VPN
A client VPN is an encrypted connection from one device towards a VPN router. It makes that one remote device appear as a member of a local subnet behind the VPN router. Traffic is tunneled from the device (usually a computer or laptop of a teleworker) towards the VPN router so that user has access to resources inside the company. It requires client software that needs to be installed and configured.

[Diagram: SSLVPN]

SSLVPN
This type of VPN works like a client VPN. The difference is that the remote client does not need preconfigured software; instead, the browser acts as the VPN client. The browser needs to support active content, which every modern browser does, either directly or through a plug-in. Traffic is tunneled over SSL (or TLS) to the SSLVPN router; from a networking perspective, traffic is tunneled over layer 4 instead of layer 3. The benefit is that the remote user does not need to configure anything and can simply log in to a web page to start the tunnel. The drawback is that you’ll likely need a dedicated device as the SSLVPN endpoint, because this is not a standard feature.

I’ve written about VXLAN before: it’s a proposed technology to tunnel frames over an existing IP network, allowing for far more segments than the 4,096 VLAN limit. At the time I wrote that article, the RFC draft had just been proposed; it expires this month.

Coincidentally or not, Cisco has just released some new switching products, among which a new version of the Nexus 1000V, which claims to support VXLAN. Given the recent release of IBM’s 5000V virtual switch for VMware products, there is clearly a lot of innovation going on in this market segment, and this will surely not be the last of it. As I have yet to test an NX1000V, I’m unsure what VXLAN support means in real life, how it will impact network topologies, and what issues may arise. Two things stand out very clearly to me: VXLAN (or any other tunneling over IP) introduces an extra layer of complexity in the network, but at the same time it makes you more flexible with existing layer 2 and layer 3 boundaries, as VXLAN no longer requires virtual machines to be in the same (physical) VLAN for broadcast-dependent things like vMotion.

I do have doubts that there is much interest in these products at this point in time. vSphere and its competitors ship with a vSwitch already present, so there is little incentive to invest in a replacement: ‘there already is a switch, why place a new one?’ But the market is maturing, and eventually vSwitch functionality will become important for any data center.

Also, last but not least, special thanks to Ivan Pepelnjak and Scott Lowe. They both have excellent blogs with plenty of data center related topics, and I often read new technologies first on their blogs before anything else.

I’ve already set up an IPv6 tunnel on three platforms: Vyatta, Cisco and Windows Server. This time, the same on OpenBSD. I’m not going to repeat myself, so for details about an IPv6 tunnel and how to get one, check the IPv6 tunnel article. I’ll be using the same example values again:
Local IPv6 subnet:  2001:0:0:1234::/64
Tunnel subnet: 2001:0:0:1235::/64, with ::2 on our side and ::1 on the other endpoint side.
IPv6 DNS: 2000::2000
Device IPv4 address: 192.168.0.10
Tunnel endpoint: 50.60.70.80
Gateway to ISP: 192.168.0.1

I assume routing and IPv4 are already configured properly, with IPs on the interfaces and a default route towards the internet. If not, you’ve missed part I. Before starting on the IPv6 part, remember that you’ll be creating a tunnel over an existing IPv4 network, so make sure pf allows the tunnel. I’ve added the following rules in /etc/pf.conf:

pass out quick on em0 from 192.168.0.10 to 50.60.70.80
pass in quick on em0 from 50.60.70.80 to 192.168.0.10

You’ll need to pass both the tunneled IPv6 (IP protocol 41) and ICMP, but since it’s just one trusted IP address, I’m using a general rule. Don’t forget to activate the rules with ‘pfctl -f /etc/pf.conf’!

Next, create the tunnel interface. In OpenBSD this is a ‘gif’ (generic tunnel interface). To make it persist between reboots, create a /etc/hostname.gif0 file, where zero denotes the first tunnel interface. The following lines go in the file:

tunnel 192.168.0.10 50.60.70.80
!ifconfig gif0 inet6 alias 2001:0:0:1235::2 2001:0:0:1235::1 prefixlen 128
!route add -inet6 default 2001:0:0:1235::1

The internal IP 192.168.0.10 is automatically translated by my router, but this may not always be the case. If not, use your external IP. The prefix length in the second line is 128, which is advised in the tunnelbroker configuration sample, but I’m not sure why. It wouldn’t work with 64 though. Finally, the third line adds a default route into the tunnel.

At this point the tunnel is up and running, but only from the OpenBSD machine itself. The devices on the connected subnet are not aware that an IPv6 router is present, so the OpenBSD box will have to send router advertisements. First, configure an IPv6 address on the internal interface by adding the following line to /etc/hostname.em1:

!ifconfig em1 inet6 alias 2001:0:0:1234::1 prefixlen 64

Next, do the actual advertisements using the rtadvd daemon. In /etc/rc.conf, find ‘rtadvd_flags=NO’ and change the ‘NO’ to the interface(s) that need it enabled, e.g. em1. Then create the file /etc/rtadvd.conf and enter the following:

em1:\
:addr="2001:0:0:1234::":prefixlen#64:

This advertises the /64 prefix on the interface. A lot of other options are possible, such as the other-config-flag and managed-config-flag for DHCPv6 options and an IPv6 DNS server, but I will not go into detail about that now. Keep in mind that ICMPv6 is used for router advertisements and neighbor discovery (the ARP replacement), so you’ll need to allow it. In /etc/pf.conf:

pass out quick on em1 inet6 proto icmp6
pass in quick on em1 inet6 proto icmp6

Finally, add some rules based on what you want to filter, e.g. a general rule blocking everything IPv6 inbound and allowing outgoing connections of any kind (for now):

pass out quick on gif0 inet6 from 2001:0:0:1234::/64 to any
block in on gif0

After this, surfing to ipv6.google.com is possible from any computer in the local subnet.

How to set up an IPv6 tunnel.

In an effort to promote IPv6 a bit more, I’m going to discuss three methods to set up an IPv6 tunnel today.

But first: what is an IPv6 tunnel and why would you need it? An IPv6 tunnel is a tunnel that transports IPv6 packets over an IPv4-only network, which is useful if you, like me, have an ISP that doesn’t use IPv6 addresses yet. By setting up the tunnel you can connect your local IPv6 network with the rest of the IPv6 internet. After configuring it, you should be able to surf to the Google IPv6 site.

Before you can begin configuring, you’ll first need an IPv6 provider. I used Hurricane Electric, others prefer Sixxs. Both are free. After registering on the site, you’ll receive a /64 subnet which is yours to use, as well as some details about setting up the connection. Yes, this means you get more IPv6 addresses for free to use in your living room than there are IPv4 addresses in the entire world.

After you have received your prefix we can begin configuration. Note that you’ll also receive a tunnel prefix which is used to configure the tunnel endpoints, as well as an IPv6 DNS server (which will require a DHCPv6 server to run on the network, annoying, I know). To make things consistent over the three configurations I’ll list example values that will be used:
Local IPv6 subnet:  2001:0:0:1234::/64
Tunnel subnet: 2001:0:0:1235::/64, with ::2 on our side and ::1 on the other endpoint side.
IPv6 DNS: 2000::2000
Device IPv4 address: 192.168.0.10
Tunnel endpoint: 50.60.70.80
Gateway to ISP: 192.168.0.1

I’m going to give the configuration for three types of device/operating system: a (virtualized) Vyatta 6.1, a (virtualized) Windows Server 2008 R2, and a Cisco 2691 router in GNS3. It is also possible to configure the tunnel on other devices (even an Apple AirPort), but I have not tested those. The tunnel used is an ipv6ip tunnel, which uses IP protocol 41. Since I’ll be passing through a NAT device (the ISP gateway), one of the tunnel endpoints will be a private address, which will be translated by the NAT device. You may need to put that IP in the DMZ to forward the tunnel properly, or in my case, to forward the ICMP keepalives properly. And finally: the tunnel endpoint does not necessarily need two ethernet interfaces: the tunnel can be sent out of the same interface the IPv6 subnet is on.

Vyatta
Using the Vyatta as an IPv6 endpoint is stable and throughput is good. The basic version is free and it barely uses any CPU, even when virtualized and under load, which makes for a nice endpoint without the need for a dedicated device. The configuration on the Vyatta is as follows:

vyatta@vyatta:~$ configure
vyatta@vyatta# edit interfaces tunnel tun0
vyatta@vyatta# set encapsulation sit
vyatta@vyatta# set local-ip 192.168.0.10
vyatta@vyatta# set remote-ip 50.60.70.80
vyatta@vyatta# set address 2001:0:0:1235::2/64
vyatta@vyatta# set description "IPv6 Tunnel"
vyatta@vyatta# exit
vyatta@vyatta# set protocols static interface-route6 ::/0 next-hop-interface tun0
vyatta@vyatta# edit interfaces ethernet eth0
vyatta@vyatta# set address 192.168.0.10/24
vyatta@vyatta# set address 2001:0:0:1234::1/64
vyatta@vyatta# set ipv6 router-advert prefix 2001:0:0:1234::/64
vyatta@vyatta# exit
vyatta@vyatta# set system gateway-address 192.168.0.1
vyatta@vyatta# commit

Unfortunately, Vyatta currently does not properly support DHCPv6, so you can’t advertise the IPv6 DNS server to hosts in the subnet. In a dual stack environment this doesn’t break anything as the hosts will query the known IPv4 DNS servers, and those respond with IPv6 addresses in their payload if needed.

Cisco IOS
I can’t configure IPv6 on my 2611 routers; apparently they don’t have enough flash memory to store the right IOS version. The 3560 I have does support it with the Advanced IP Services IOS, but I don’t have that image, so I’m really out of luck here.
Update September 18, 2011: the 3560 has IPv6 support with the IP Services IOS as the Advanced IP Services is no longer used for a 3560, but there’s no support for tunneling as it can only be done in software and puts a heavy load on the CPU.

I resort to GNS3, where I set up a router and connect it to the physical network. The configuration of the tunnel is as follows:

Router#configure terminal
Router(config)#ipv6 unicast-routing
Router(config)#interface Tunnel0
Router(config-if)#description IPv6 Tunnel
Router(config-if)#no ip address
Router(config-if)#ipv6 address 2001:0:0:1235::2/64
Router(config-if)#tunnel source 192.168.0.10
Router(config-if)#tunnel destination 50.60.70.80
Router(config-if)#tunnel mode ipv6ip
Router(config-if)#exit
Router(config)#ipv6 route ::/0 Tunnel0
Router(config)#interface FastEthernet0/0
Router(config-if)#ipv6 address 2001:0:0:1234::/64 eui-64
Router(config-if)#ipv6 nd prefix 2001:0:0:1234::/64
Router(config-if)#exit
Router(config)#ip route 0.0.0.0 0.0.0.0 192.168.0.1
Router(config)#end
Router#write

Note that, just like with the Vyatta, you have to tell the router which prefix to advertise over the subnet. I was unable to properly configure DHCPv6 so all hosts could get an IPv6 DNS server, despite best efforts. Either a command is not working as expected or I am doing it wrong, so for now, it will work just like the Vyatta, with hosts querying DNS by IPv4.
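For reference, a stateless DHCPv6 setup on IOS, which should be enough to hand out just a DNS server, would normally look roughly like this (the pool name DNS6 is arbitrary, and whether this solves the problem described above is not verified here):

ipv6 dhcp pool DNS6
 dns-server 2000::2000
!
interface FastEthernet0/0
 ! tell clients to ask DHCPv6 for the "other" configuration (DNS), while keeping SLAAC for addresses
 ipv6 nd other-config-flag
 ipv6 dhcp server DNS6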

Windows Server 2008 R2
Even though this is the only one of the three devices with a GUI, most of the configuration on the Windows Server is done through the command line as well. The netsh command allows you to manipulate the IP stack in detail, as shown in the following configuration:

C:\>netsh interface teredo set state disabled
C:\>netsh interface ipv6 add v6v4tunnel IPv6Tunnel 192.168.0.10 50.60.70.80
C:\>netsh interface ipv6 add address IPv6Tunnel 2001:0:0:1235::2
C:\>netsh interface ipv6 add route ::/0 IPv6Tunnel 2001:0:0:1235::1

The first command disables the built-in Teredo in Windows, which automatically tries to create an IPv6 tunnel whenever IPv6 connectivity is needed. If you care about security, I would recommend this command on all your Windows 7 computers.

Next, create a gateway for the subnet. Go to the Network & Sharing Center, choose Change Adapter Settings, and give the network card facing the subnet a static IPv6 address, in this case 2001:0:0:1234::1/64. The gateway should be ::, the DNS server 2000::2000.

Last, we make sure all hosts receive the DNS address. This requires the DHCP role to be installed on the Windows Server. If present, go to Server Manager, DHCP, and configure an IPv6 scope for 2001:0:0:1234::/64. Next, in the scope options, add option 23 and fill in 2000::2000. Windows accepts this without any warning, but I couldn’t get it to work without rebooting afterwards.

So, these are three methods to get your IPv6 tunnel working. I hope it is all clear, greetings!

Update October 18, 2011: the Cisco IOS and Vyatta configurations were missing a command: you need to configure a default route (gateway), otherwise the device will not know where to send the tunnel! Commands updated.

VLAN limit and the VXLAN proposal.

Today I stumbled across a nice RFC draft which proposes a new kind of network topology in data centers (thanks to Omar Sultan for the link on his blog). It’s four days old (at the time of writing) and is proposed by some major players in the data center market: it mentions Cisco, Red Hat, Citrix and VMware among others.

It proposes the use of VXLANs, or Virtual eXtensible Local Area Networks, which is basically a tunneling method to transport frames over an existing Layer 3 network. Personally, after reading through it, the first thing that came to mind was that this was another way to solve the large layer 2 domain problem that exists in data centers, in direct competition with TRILL, Cisco’s FabricPath, Juniper’s QFabric, and some other (mostly immature) protocols.

But then I realised it is so much more than that. It comes with 24 identifier bits instead of the 12 bits used for VLANs: an upgrade from 4,096 VLANs (2^12) to about 16.7 million VXLANs (2^24). Aside from this, it also solves another problem: switch CAM tables would no longer need to keep track of all the virtual MAC addresses used by VMs, but only of the endpoints, which at first sight seem to be just the physical servers. (I don’t think this is a big problem yet. The draft claims ‘hundreds of VMs on a physical server’, which I find hard to believe, but with the increase of RAM and cores on servers this may soon become reality in the average data center.) It also proposes efficient mechanisms for Layer 2 to Layer 3 address mapping and for multicast traffic. And since it creates a Layer 2 tunnel, it would allow for different Layer 3 protocols as well.

Yet I still see some unsolved problems. What about QoS? Different VMs may need different QoS classifications. I also noticed the use of UDP, which I understand because it avoids the overhead of TCP, but I don’t feel comfortable sending important data on a best-effort basis. There is also no explanation of the impact on link MTU, though that is only a minor issue.

In any way, it’s an interesting draft, and time will tell…