Tag Archive: Cabling

Another series of articles. So far on this blog, I’ve concentrated on getting routed networks running with basic configuration. But at some point, you may want to refine that configuration to provide better security, better failover, less chance of unexpected issues, and, if possible, make things less CPU- and memory-intensive as well.

While I was recently designing and implementing an MPLS network, it became clear that using defaults everywhere wasn’t the best way to proceed. As shown in the MPLS-VPN article, several different protocols are used: BGP, OSPF and LDP. Each of these establishes a neighborship with the next hop, and each uses different hello timers to detect issues: 60 seconds for BGP, 10 seconds for OSPF and 5 seconds for LDP.

The first thing that comes to mind is synchronizing these timers, e.g. setting them all to 5 seconds with a 15-second dead time. While this does improve failover, it still means three separate keepalives crossing the link to check whether it works, and still several seconds of failover time. It would be better to bind all these protocols to one common keepalive. UDLD comes to mind, but that checks whether a fiber works in both directions, needs seconds to detect a link failure, and only works between two adjacent layer 2 interfaces. The ideal solution would check layer 3 connectivity between two routing protocol neighbors, regardless of the switched path in between. This would be especially useful for WAN links, where the fiber signal (the laser) tends to stay active even if there’s a failure somewhere in the provider network.
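For reference, the timer-synchronization approach could look something like this on a Cisco device (a sketch; the BGP neighbor address is a placeholder, OSPF timers are set per interface, BGP timers per neighbor and LDP hellos globally):

Router(config-if)#ip ospf hello-interval 5
Router(config-if)#ip ospf dead-interval 15
Router(config)#router bgp 65000
Router(config-router)#neighbor 192.0.2.2 timers 5 15
Router(config)#mpls ldp discovery hello interval 5
Router(config)#mpls ldp discovery hello holdtime 15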


Turns out this is possible: Bidirectional Forwarding Detection (BFD) can do exactly that. BFD is a vendor-independent, open-standard protocol (RFC 5880) that establishes a session between two layer 3 devices and periodically sends hello packets, or keepalives. If the packets are no longer received, the connection is considered down. Configuration is fairly straightforward:

Router(config-if)#bfd interval 50 min_rx 50 multiplier 3

The values used above are the minimum values. The first 50, ‘interval’, is the time in milliseconds between hello packets. The ‘min_rx’ is the expected receive rate for hello packets; the documentation isn’t clear on this, and I was unable to see any difference in behavior in my tests when this parameter was changed. The ‘multiplier’ value is how many hello packets can be missed before flagging the connection as down. The above configuration will declare the connection down after 150 ms (3 × 50 ms). The configuration needs to be applied on the remote interface as well, but that alone will not activate BFD: it needs to be attached to a routing process on both sides before it starts to function, and it takes its peers from those routing processes. Below are the commands for OSPF, EIGRP and BGP:

Router(config)#router ospf 1
Router(config-router)#bfd all-interfaces
Router(config)#router eigrp 65000
Router(config-router)#bfd all-interfaces
Router(config)#router bgp 65000
Router(config-router)#neighbor <neighbor-ip> fall-over bfd

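The detection time implied by the interval and multiplier is simple arithmetic; as a rough sketch (not Cisco’s exact logic, which uses the values negotiated between both peers):

```python
def bfd_detection_time_ms(rx_interval_ms: int, multiplier: int) -> int:
    """Time before a BFD session is declared down: the receive
    interval times the detect multiplier (missed-hello count)."""
    return rx_interval_ms * multiplier

# 50 ms interval with multiplier 3: the session is declared down
# after three missed hellos, i.e. 150 ms.
print(bfd_detection_time_ms(50, 3))
```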
This makes the routing protocols much more responsive to link failures. For MPLS, the LDP session cannot be coupled with BFD on a Cisco device, though on a Juniper it’s possible. This is not mandatory, as no frames will be sent on the link anymore once the routing protocol neighborships break and the routing table (well, the FIB) is updated.

Result: fast failover, relying on a dedicated protocol rather than outdated default timers:

Router#show bfd neighbor

NeighAddr                         LD/RD    RH/RS     State     Int
                                   1/1     Up        Up        Fa0/1

Jun 29 14:16:21.148: %OSPF-5-ADJCHG: Process 1, Nbr on FastEthernet0/1 from FULL to DOWN, Neighbor Down: BFD node down

Not bad for a WAN line.


OSI Layer 1, part II: fiber

Part I covered copper; fiber standards differ in several ways. For one, fiber always uses a dedicated strand for each direction, so it’s always full duplex. The official fiber standards for Ethernet (using a small ‘x’ as a wildcard):

802.3j – 10BASE-F
One letter up from 10BASE-T, this is the standard for 10 Mbps over fiber. It was never widely adopted, most likely because fiber was (and is) more expensive than existing copper (telephone) wiring, so few invested in new cabling just to get the same speed.

802.3u – 100BASE-FX
Yes, the same standard as copper; they were defined together. Note that 100BASE-SX products were also made by many vendors, but it was never made an official standard. It was significantly cheaper than 100BASE-FX: 100BASE-FX uses 1300 nm optics and can go up to 2 km on multi-mode fiber, while 100BASE-SX used cheaper 850 nm optics but only went up to 550 m.

802.3z – 1000BASE-X
The gigabit standard for fiber was defined before the copper standard. It defines multiple cable types and wavelengths, but generally speaking it allows multi-mode fiber up to 550 m and single-mode fiber up to 5 km. Longer distances are possible with higher quality fibers.

802.3ae – 10GBASE-xx
The standard defines multiple modes of operation. For multi-mode, the most used standards are 10GBASE-SR (400 m) and 10GBASE-LRM (802.3aq, 220 m). Single-mode has 10GBASE-LR (10 km) and 10GBASE-ER (40 km).

802.3ba – 40GBASE-xR4, 100GBASE-xR4 & 100GBASE-xR10
One standard defining two different speeds. For 40 GE, the -xR4 means four physical lanes are used in each direction. These cables have eight or twelve smaller fiber strands inside (in the case of twelve, four are currently unused), each running at 10 Gbps. Data is spread across these fibers in a sort of ‘layer 1 port-channel’ fashion.
There’s not much information on 100 GE cable types yet. It seems either 10 fiber strands are used in each direction, at 10 Gbps, or 4 fibers at 25 Gbps each.
The distance is the same for both: 100-125 m over multi-mode fiber (depending on the quality: OM3 or OM4) and 10 km over single-mode fiber.
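The ‘layer 1 port-channel’ idea above can be sketched as round-robin striping of data blocks across lanes. This is a simplification of what the real 802.3ba PCS does (which distributes 66-bit blocks with alignment markers), but it shows the principle:

```python
def stripe(blocks, lanes=4):
    """Distribute data blocks round-robin across the given number of lanes."""
    out = [[] for _ in range(lanes)]
    for i, block in enumerate(blocks):
        out[i % lanes].append(block)
    return out

# Eight consecutive blocks over four lanes: lane 0 carries blocks 0 and 4,
# lane 1 carries blocks 1 and 5, and so on.
print(stripe(list(range(8))))
```

The receiver reverses the striping to reassemble the original stream, which is why all lanes must stay aligned (hence the alignment markers in the real standard).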

Cable types
There are three different types of cables: multi-mode step index, multi-mode graded index, and single mode fiber.


Source: Wikipedia

In a multi-mode step index fiber, the light reflects off a single boundary between the core and the cladding. Graded index is similar, except that the gradually changing (graded) density of the glass means there’s no single reflection surface, but rather a ‘bending’ of the light inside. This gives less attenuation (weakening) of the signal, which is why most multi-mode fiber used in networking is graded index: typically usable up to a few hundred meters and relatively cheap.
Single mode uses a very small core, so the laser follows a much straighter path towards the next device. This results in far less attenuation and allows the signal to cross multiple kilometers.

Propagation speed
A widely held belief is that fiber is faster than copper, because light propagates at 300,000 km/s and electrical signals at about 200,000 km/s.
However, in a recent session about ultra-low-latency designs, Lucien Avramov showed this to be a misconception: a typical fiber has a refractive index of about 1.5, so the light inside the glass propagates at about… 200,000 km/s. Copper and fiber are the same in this regard, with signals travelling at roughly 5 ns (nanoseconds) per meter. Fastest cable? A twinax cable, at 4.3 ns per meter, thanks to the higher quality metal inside, which allows faster propagation. However, twinax is limited to about 5 meters in passive mode and 10 meters in active mode. Taking into account that a typical copper SFP and active twinax connector introduce more latency than a fiber SFP, fiber is still the best option for ultra-low-latency environments where you need to run more than 5 meters of cable.
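The numbers above follow directly from the refractive index; a quick sketch of the calculation:

```python
C_VACUUM_KM_S = 299_792  # speed of light in vacuum, km/s

def propagation_speed_km_s(refractive_index: float) -> float:
    """Signal speed in a medium: c divided by the refractive index."""
    return C_VACUUM_KM_S / refractive_index

def delay_ns_per_m(refractive_index: float) -> float:
    """Propagation delay per meter, in nanoseconds."""
    return 1e9 / (propagation_speed_km_s(refractive_index) * 1000)

# Fiber with n = 1.5: roughly 200,000 km/s, i.e. about 5 ns per meter
print(round(propagation_speed_km_s(1.5)))  # ~199,861 km/s
print(round(delay_ns_per_m(1.5), 2))       # ~5.0 ns/m
```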

That covers the speeds and cable types, but what about the connectors? Fiber connectors often aren’t fixed on a networking device; instead, plug-in modules are used, in most cases hot-swappable. For 100 and 1000 Mbps on older switches, GBIC modules are used:


These are very wide and take up a lot of space. For this reason, Small Form-factor Pluggable (SFP) modules were made, on which most gigabit fibers (and copper cables too) terminate these days:


For 10 Gbps, SFP+ modules are used, which look nearly identical to SFP modules; an SFP module also fits in an SFP+ slot. These SFP and SFP+ interfaces are the same size as typical RJ-45 interfaces, so switches with 24 SFP ports are not uncommon.
40 Gbps currently has no single clearly defined module, but often these are used:


This is a 40 GE cable. The modules are attached to the cables. This is a thick cable as there are 8 or 12 smaller ones inside.
Finding an image of a 100 GE cable proves to be impossible for now, but for comparison an image of the 100 GE module of a Nexus 7000:


These are just two ports, yet they cover most of the front panel. Most likely, smaller form factors will be introduced in the future.

So, theory, starting from the bottom up. In this part, I’ll cover wired Ethernet over UTP standards. The official standards to date are:

802.3i – 10BASE-T
The first widespread standard. Defaults to half-duplex and uses one copper pair for transmitting, and one for receiving. This leaves two of the four copper pairs in a Cat 5 UTP cable unused.
Requires a Cat 3 cable or higher.

802.3u – 100BASE-TX
Second widespread standard: same default of half duplex, same two copper pairs.
Requires a Cat 5 cable or higher, despite using only two pairs. Ironically, this standard introduced duplex autonegotiation, to which 10 Mbps support was added later on.

802.3ab – 1000BASE-T
Third standard, 1 Gbps. Uses all four copper pairs in the cable and assumes full duplex. Depending on the implementation, it may try to fall back to half duplex if it detects that one of the pairs is damaged; some implementations instead fall back to 100 Mbps, which doesn’t need all pairs, or simply don’t bring the link up at all.
Requires a Cat 5 cable or higher, with Cat 5e recommended (Cat 5e is the same as Cat 5, but the technical requirements are enforced more strictly).

802.3an – 10GBASE-T
This standard only supports full duplex; half duplex is not an option, and thus CSMA/CD (Collision Detection) is no longer present. Unlike previous standards, where the required cable could run up to 100 meters, 10GBASE-T has two cable types: Cat 6, with a maximum of 55 meters in a low-interference environment and 37 meters recommended in a high-interference environment, and Cat 6a, which goes up to the usual 100 meters.

These are the speed standards over UTP.  But how do interfaces negotiate link speed and duplex?

Duplex and speed are determined using fast link pulses (FLP). Despite the name ‘autonegotiation’, it’s not really a negotiation process. Each interface on a link sends out a series of FLPs: 17 clock pulses, 125 microseconds apart. Between these pulses (62.5 microseconds after each clock pulse) an additional pulse may be present: if present, it’s a ‘1’; if not, a ‘0’. This way, 16 boolean values are sent over the cable, listing the supported interface modes (10 Mbps, 100 Mbps, half/full duplex). The last bit, when set to ‘1’, means another page will follow: again a series of FLPs. While 100 Mbps interfaces ignore these, gigabit-capable interfaces check the following pages too, because those list gigabit and 10-gigabit support. Both sides of the link then compare capabilities, and the highest common capability is chosen.
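The encoding and the ‘highest common capability’ selection can be sketched like this. The bit positions below index the technology ability field only (in the real 802.3 base page, that field sits after a 5-bit selector field), so this is a simplified illustration, not the full link code word:

```python
# Technology ability bits advertised in the autonegotiation base page
# (selector, pause and fault bits omitted for brevity).
ABILITY_BITS = {
    "10BASE-T": 0,
    "10BASE-T-FD": 1,
    "100BASE-TX": 2,
    "100BASE-TX-FD": 3,
}

# Priority order used to pick the highest common capability.
PRIORITY = ["100BASE-TX-FD", "100BASE-TX", "10BASE-T-FD", "10BASE-T"]

def encode(abilities):
    """Encode advertised abilities as the presence/absence of data pulses."""
    word = [False] * 16
    for name in abilities:
        word[ABILITY_BITS[name]] = True
    return word

def resolve(local, remote):
    """Pick the highest capability both sides advertise, or None."""
    common = set(local) & set(remote)
    for mode in PRIORITY:
        if mode in common:
            return mode
    return None

print(resolve({"10BASE-T", "100BASE-TX-FD"}, {"100BASE-TX", "100BASE-TX-FD"}))
# -> 100BASE-TX-FD
```

When there is no overlap at all, `resolve` returns None, which corresponds to the link not coming up in a usable mode.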

If one of the two sides is set to a static configuration and doesn’t send out FLPs, the other interface will try to sense the carrier signal and adapt (10 and 100 Mbps use different carrier signals). Duplex mode can’t be sensed and will default to half duplex. For gigabit it’s slightly different: the standard requires autonegotiation. I haven’t found confirmation in any documentation, but it seems setting the speed and duplex manually only changes the advertised FLP values.

I originally read some papers about fast link pulses but I can’t find the source URLs anymore. Wikipedia, as often, did provide many details consistent with what I’ve read.

Up next in part II: some more details about fiber!