One of the topics you encounter in a CCNA course is the set of switching methods a switch can use: store-and-forward, cut-through and fragment-free. Before I continue, a brief explanation of the methods:

  • Store-and-forward: the entire frame is received in an ingress buffer, a checksum is calculated to see if the frame is error-free, the destination MAC is located in the CAM table, and the frame is forwarded. Emphasis on transmitting error-free frames.
  • Cut-through: as soon as the frame header is received, the destination MAC is located in the CAM table and the frame is forwarded. Emphasis on low latency switching.
  • Fragment-free: the header and the first 64 bytes of the frame are received before forwarding. Collisions on half-duplex media usually occur within the first 64 bytes, so collision fragments (runts) are never forwarded. Emphasis on low latency, but also error-free operation on half-duplex media.
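The key difference between the three methods is simply how much of the frame must arrive before the forwarding decision is made. A minimal sketch (not real switch code; the 14-byte header and 64-byte window are the standard Ethernet values):

```python
import zlib

HEADER_LEN = 14         # dst MAC (6) + src MAC (6) + EtherType (2)
FRAGMENT_FREE_LEN = 64  # minimum Ethernet frame size; collision runts are shorter

def bytes_before_forwarding(method: str, frame: bytes) -> int:
    """Return how many bytes must be received before forwarding can start."""
    if method == "cut-through":
        return HEADER_LEN              # forward as soon as the dst MAC is known
    if method == "fragment-free":
        return FRAGMENT_FREE_LEN       # wait out the collision window first
    if method == "store-and-forward":
        return len(frame)              # buffer everything, then verify the FCS
    raise ValueError(method)

def frame_is_valid(frame: bytes, fcs: int) -> bool:
    """Only store-and-forward can run this check: it needs the whole frame."""
    return zlib.crc32(frame) == fcs

frame = bytes(1500)  # a full-size frame
for method in ("cut-through", "fragment-free", "store-and-forward"):
    print(method, bytes_before_forwarding(method, frame))
```

Note that only store-and-forward ever sees the full frame, which is why it is the only method that can guarantee it never forwards a corrupted frame.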

You have to know this for a CCNA exam, but besides that, it’s little more than theoretical knowledge. It’s never even explained how to change the method used on a switch, or which one is the default. The reason for this is that it’s largely a historical setting. On a 1900 series switch, you can still change it, using the ‘switching-mode store-and-forward’ command (the default is fragment-free). On all modern switches the default and only method is store-and-forward, both for consumer-grade switches and most business-grade ones. A notable exception is the Cisco Nexus series, which uses an updated cut-through method.

So why is the store-and-forward method so widely used now? The reason is that the added latency is small. At 10Gbps, we’re talking about less than a microsecond of increased latency; at 1Gbps, a few microseconds maybe. And since latency is commonly measured in the millisecond range, that’s really not a problem. The Nexus series uses cut-through, although it acts like store-and-forward at times, e.g. when switching between ports running at different speeds. My guess is that it’s another attempt at marketing. The only interesting argument is VMWare’s vMotion, which moves a virtual machine between physical hosts. This requires a latency below 5ms (the latest version, vSphere 5.0, should be able to handle 10ms), and since the Nexus series was designed for data centers, there may be a relation here. For in-depth information, see this document on the Cisco website.
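The extra latency of store-and-forward is essentially the serialization delay: the time it takes the whole frame to arrive before forwarding can start. A quick back-of-the-envelope calculation for a maximum-size 1518-byte Ethernet frame:

```python
def serialization_delay_us(frame_bytes: int, link_bps: float) -> float:
    """Time in microseconds to receive frame_bytes at link_bps."""
    return frame_bytes * 8 / link_bps * 1e6

# Worst case: a full-size 1518-byte Ethernet frame.
print(serialization_delay_us(1518, 10e9))  # ~1.2 µs at 10 Gbps
print(serialization_delay_us(1518, 1e9))   # ~12 µs at 1 Gbps
```

Smaller frames buffer proportionally faster, so in practice the penalty sits well below these worst-case numbers, and far below the millisecond range that latency budgets are usually expressed in.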

A second thing you encounter in networking studies is the ‘ASIC’, or Application Specific Integrated Circuit. This is a chip designed to do only one task. Note that ASIC is not an exclusively network-related term: besides switching ASICs, there are encryption ASICs, as well as ASICs designed for voice decoding, video decoding, and so on.

An ASIC is supposed to be an improvement in a network environment, allowing for switching in hardware and taking the load off the CPU. This is certainly true: a Cisco switch transferring large volumes of data will barely see an increase in CPU, idling at around 10%. On the other hand, these ASICs come with an increased cost for the switch. Vyatta (among others) makes a nice argument there, saying a modern multicore x86 system is not that expensive, and with 2+ GHz per core, there’s enough headroom for gigabit and even 10 gigabit switching. Hardware switching does decrease latency, but again, just in the microsecond range. The big advantage of software switching over hardware switching is adaptability: when a combination of features has to be used, or a non-standard implementation is required, software can adapt, whereas hardware cannot.
