Not every layer 2 design is the same. There are many features and techniques you can use in a layer 2 LAN, but which of them will be effective depends on the purpose of that LAN. Sometimes it’s better to leave a feature out, because enabling it can have unexpected consequences.
So far, I’ve encountered four distinct types of layer 2 networks in practice.
The typical Campus LAN or office network is a network where mostly end users connect. In its simplest form, it’s one VLAN where the computers connect. As it grows, it will usually gain a second VLAN for IP Phones, and if it grows larger still, separate VLANs for different kinds of users, a separate VLAN for in-office services (think printers, digital signage, perhaps security cameras), and in case of a full-scale wireless architecture, a separate VLAN for Lightweight Access Points (LAPs). Typically, DHCP is used a lot here, and users expect a ‘fast user experience’, which usually translates to low latency and low to medium bandwidth usage. Only rarely do end users require full gigabit connectivity to the desktop (although they usually think they do).
The following are typical design characteristics of such a Campus LAN:
- The typical access ports, optionally with an auxiliary VLAN for voice. Static configuration, or perhaps dynamic VLAN assignment through 802.1x or other means if you’re up to the task.
- Things like ‘switchport nonegotiate’ and ‘no cdp enable’ should be obvious on these access ports. If Cisco IP Phones are used, CDP may be of use though.
- Interesting security features: DHCP Snooping (with the switch uplinks trusted) activated on client VLANs, port-security, and BPDU Guard. Keep in mind that port-security counts any MAC address on any VLAN, so an IP Phone counts as one. Even setting the limit to 5 MAC addresses is better than not setting it at all, as it will still counter a MAC address exhaustion (CAM table overflow) attack.
- If you’re worried about having to go and re-enable a switchport every time BPDU Guard or port-security kicks in, you can configure err-disable recovery. If you don’t think that will happen at all, you have too much confidence in mankind.
- IP Phones require PoE and most models are capped at 100 Mbps, making a gigabit switch pointless if you daisy-chain computers behind the IP Phones. Personally, I like 100 Mbps to desktops in most situations: applications don’t require more, and it’s an easy way to keep one user from pulling too much bandwidth without configuring QoS.
- ARP Inspection, while certainly a good feature, occasionally doesn’t work correctly, I’ve noticed. Still, a Campus LAN is the most likely place you’ll see an ARP spoofing attack.
- Think dual stack. I’m going to stress my IPv6 RA Guard post once more to counter any IPv6-related attacks on the subnet. Blocking IP protocol 41 (IPv6 encapsulated in IPv4) out of the network will counter the automatic tunneling mechanisms client devices may have (Windows 7 has one enabled by default).
- Taking the above into account, Cisco 2960 and 2960S series are usually perfect for this environment, with the 3560v2 and 3750X as options should layer 3 switches be required.
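To make the above concrete, here’s a minimal Cisco IOS sketch of such an access port. The VLAN numbers (10 for data, 20 for voice), interface names, timer value, and ACL name are my own assumptions for illustration; exact syntax may vary per platform:

```
! Globally enable DHCP Snooping on the client VLANs (assumed: 10 and 20)
ip dhcp snooping
ip dhcp snooping vlan 10,20
!
! Automatically recover err-disabled ports after 5 minutes
errdisable recovery cause bpduguard
errdisable recovery cause psecure-violation
errdisable recovery interval 300
!
interface GigabitEthernet0/1
 description End-user port with IP Phone
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 20
 switchport nonegotiate
 ! Omit 'no cdp enable' if Cisco IP Phones rely on CDP for the voice VLAN
 no cdp enable
 spanning-tree portfast
 spanning-tree bpduguard enable
 switchport port-security
 switchport port-security maximum 5
!
interface GigabitEthernet0/24
 description Uplink to distribution switch
 ip dhcp snooping trust
!
! Block automatic IPv6-in-IPv4 tunnels (IP protocol 41) at the layer 3 edge
ip access-list extended BLOCK-PROTO41
 deny 41 any any
 permit ip any any
```

The recovery interval is a trade-off: long enough to slow an attacker down, short enough that a legitimate user who tripped port-security isn’t locked out for long.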
A Server LAN is a bunch of physical servers connected to switches. A smaller company’s Server LAN is often just one VLAN for all the servers. If there are internet-facing servers, like web servers or a proxy, they should have a dedicated ‘DMZ’ VLAN, as these servers are the most prone to direct hacking attempts. Unlike the Campus LAN, high traffic volumes may occur here.
- At least gigabit is needed for a decent server, as multiple users will connect to one server. 100 Mbps is not forbidden though; some services barely use any bandwidth.
- DHCP Snooping and ARP Inspection are quite useless here. Servers have static IPs, and getting ARP Inspection working in such an environment requires a lot of static entries, configuration overhead, and difficult troubleshooting.
- The above-mentioned IPv6 RA Guard stays valid, because of IPv6’s different approach to address assignment. Use it with care when it’s implemented in software, though.
- Port-security works, and can map a MAC address to a port. Servers don’t usually move in a physical environment, but in a virtualized environment with vMotion and the like, it’s not of much use.
- Things like ‘switchport nonegotiate’ and ‘no cdp enable’ should be obvious again.
- BPDU Guard, even on trunk links to servers, is a good idea. Some might argue that it’s not good to have an important server disconnected from the network because it happens to send out a BPDU frame by mistake, but I personally don’t consider that a network-related problem.
- Private VLANs can seriously increase security if deployed properly. It’s usually sufficient if the servers can only communicate with the gateway. They don’t work if the servers need to see each other (a cluster heartbeat, for example), nor in virtualized environments, as they don’t work with VLAN tagging.
- If the budget allows it and you require QoS and bigger buffers, a Cisco 4948 becomes an interesting option.
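As a sketch of the Private VLAN idea: servers go into an isolated secondary VLAN and can only reach the promiscuous port holding the gateway. The VLAN numbers (100 primary, 101 isolated) and interfaces below are hypothetical; note that with VTP version 1 or 2 the switch must be in VTP transparent mode to configure private VLANs:

```
vtp mode transparent
!
vlan 101
 private-vlan isolated
vlan 100
 private-vlan primary
 private-vlan association 101
!
interface GigabitEthernet0/1
 description Server port, isolated from other servers
 switchport mode private-vlan host
 switchport private-vlan host-association 100 101
 spanning-tree bpduguard enable
!
interface GigabitEthernet0/24
 description Gateway port, reachable by all isolated hosts
 switchport mode private-vlan promiscuous
 switchport private-vlan mapping 100 101
```

Servers that do need to see each other (a cluster) could share a community secondary VLAN instead of the isolated one, but that’s exactly the case where private VLANs start losing their value.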
A Data Center LAN is like a Server LAN, but heavily consolidated. Virtualization places many servers on one physical uplink. While a large company’s data center will not have a large number of VLANs, a colocation data center can have hundreds of VLANs, and in extreme cases even approach the 802.1Q maximum of 4,094 usable VLANs.
- I consider gigabit mandatory, and 10 Gbps is becoming the standard these days. After all, several virtual servers share the link, and FCoE further consumes bandwidth.
- The remaining configuration is like a Server LAN’s, but because of the shared environment and the many trunk links, Private VLANs are not an option. Disabling DTP and CDP on the server links, plus BPDU Guard, are the only usable security features left.
- Again, IPv6 RA Guard applies, although here I would recommend either disabling the IPv6 stack or configuring it statically.
- QoS features are recommended.
- The spanning-tree mode here should be MST. Rapid PVST+ runs a separate spanning-tree instance per VLAN, so with hundreds of VLANs it generates many BPDUs that have to be handled in software; MST maps all VLANs onto a few instances.
- A Data Center LAN requires data center-grade switches: at least a Cisco 4948, but this environment is the home of chassis switches, the Cisco 4500, 6500, and the Nexus family.
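A minimal MST sketch illustrating the point: all VLANs map onto a couple of instances, so the BPDU count stays constant no matter how many VLANs are added. The region name, revision, and instance-to-VLAN split below are hypothetical, and must match on every switch in the region:

```
spanning-tree mode mst
!
spanning-tree mst configuration
 name DC-REGION
 revision 1
 instance 1 vlan 1-2047
 instance 2 vlan 2048-4094
!
! Make this switch root for instance 1 and backup root for instance 2,
! so traffic can be balanced across two uplinks per VLAN range
spanning-tree mst 1 priority 4096
spanning-tree mst 2 priority 8192
```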
The last layer 2 network type is a core network. It’s an environment that does no filtering and offers no functionality other than forwarding as fast as possible, e.g. a large Campus LAN core, or a provider backbone carrying BGP transit traffic.
- This is 10 Gbps or faster.
- As little extra functionality besides forwarding as possible, and if present, done in hardware.
- Cisco’s 6500 chassis has 10 Gbps blades, and even a new 4-port 40 Gbps blade: WS-X6904-40G-2T. Extreme Networks seems to have a more extended portfolio here, with the BlackDiamond X chassis claiming up to 192 40GbE ports.
This is just my opinion on things, a first combination of theoretical knowledge and field experience. If you don’t agree, let me know in the comments – I’m hoping for a discussion on this one.