Tag Archive: VLAN


Virtual switching plays an important role in the data center, so I’m going to give a brief overview of the different products. What is virtual switching? Well, a physical server these days usually runs a hypervisor as its operating system, which has only one function: running other operating systems as virtual machines on top of it. These virtual machines can be Windows, Linux, Solaris, or even other operating systems. The virtual machines need network connectivity, and for that they share one or more physical network interface cards in the server, commonly called pNICs. To regulate this network traffic, a virtual switch, called a vSwitch, runs in software on the hypervisor and connects the pNICs with the virtual network interface cards of the virtual machines, called vNICs. So it looks like this:

Virtual Network

The blue parts are done in software, only the last part, the pNIC, is physical.

There are three big players in the hypervisor market: Citrix with XenServer, Microsoft with Hyper-V, and VMware with ESXi and vSphere. Each has its own implementation of a virtual switch.
Apart from that, Cisco offers the Nexus 1000V virtual switch.

Citrix XenServer
I have no experience with XenServer and so far I’ve found little information on it. One virtual switch that can be used is Open vSwitch, an open source product which runs on Xen and VirtualBox. I’m not sure if this is the only virtual switch that XenServer supports. Open vSwitch supports a variety of features you would expect from a switch: trunking, 802.1Q VLAN tags, link aggregation (LACP), tunneling protocols, Switched Port Analyzer (SPAN), IPv6, and basic QoS. I could not find anything with regard to Spanning Tree Protocol support, so I’m uncertain what will happen if a loop is created to a server with multiple pNICs and no link aggregation configured.

Microsoft’s Hyper-V
Again, I have little real-world experience with Hyper-V, and details are not clear, but the virtual switch supports the mandatory 802.1Q VLAN tags and trunking. Advanced spanning-tree support is missing as far as I can tell; you can’t manipulate it. I’ve found no information on link aggregation support. It’s a very simple switch compared to the other products. There’s one advantage though: you can run the Routing and Remote Access role on the Windows Server and do layer 3 routing for the VMs, which offers some possibilities for NAT and separate subnets without the need for a separate router. It’s a shame Microsoft decided to no longer support OSPF in Windows Server 2008, as it might have been a great addition, making a vRouter possible. RIPv2 should still work.

VMware’s ESXi and vSphere
The vSwitch developed by VMware is, in my opinion, very good for a basic deployment. It supports 802.1Q VLAN tags and trunking. It does not run spanning tree itself, but incoming spanning-tree frames are discarded instead of forwarded. Any frame entering through a pNIC with the source MAC of one of the virtual machines is dropped, and broadcasts are sent out through only one pNIC. These mechanisms prevent loops from forming in the network. Link aggregation is present, but only a static EtherChannel can be formed, which requires some additional planning on the physical switch. QoS is not supported, and neither are layer 3 functions.
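
As an illustration, the physical switch side of such a static EtherChannel on a Cisco switch would look something like this (the interface numbers are just an example); the vSwitch side must then be set to route based on IP hash, matching the load-balancing method configured here:
Switch(config)#port-channel load-balance src-dst-ip
Switch(config)#interface range g0/1 - 2
Switch(config-if-range)#channel-group 1 mode on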

Nexus 1000V virtual switch
I’m adding the Nexus 1000V to this list, as it is currently one of the few products on the market that can be used as a vSwitch in place of the default hypervisor vSwitch. Currently there’s only support for vSphere, but Cisco has announced that there will be support for Windows Server 8, too.
The Nexus 1000V is supposed to support anything that’s possible on a physical Nexus switch. Compared to the default vSwitch, it adds support for LACP, QoS, private VLANs, access control lists, SNMP, SPAN, and so on.

With the ongoing virtualisation of data centers, virtual switching is an emerging market, and it’s well worth looking into.


First-time configuration of Private VLANs.

Today I tried implementing Private VLANs for the first time.

A quick explanation of PVLANs: with Private VLANs, you can segment an existing VLAN, providing isolation and security for end devices. Devices on an isolated port can only talk to promiscuous ports: usually the port going towards the gateway router. Community ports can talk to the promiscuous port and to all other ports in their own community.

The illustration below is what I have set up to test it.
PVLAN setup.
The switch is a Cisco 3560 series, capable of PVLANs, currently configured with VLAN 1 on all ports. This means that the IP Phone and the computer are in the same VLAN. Not a good practice, but since the router (provided by the ISP) does not support multiple VLANs and trunking, that’s what I have to work with. So to provide some form of security for the IP Phone, I’m going to put it in its own isolated PVLAN. The port to the router will be the promiscuous port. This way, the IP Phone will only be able to communicate with the gateway, segmenting it from the rest of the network.

The IP Phone has IP 192.168.0.106 and is connected to FastEthernet 0/2. The router is on FastEthernet 0/1. Before we start implementing the PVLAN, it can be pinged from the computer connected on FastEthernet 0/3.
Successful ping to the IP Phone.

Warning! Always configure PVLANs through the console port, or through a switchport that will not be affected by the PVLANs, otherwise you’ll lose connectivity during configuration.

The first thing to do is put VTP in transparent mode, as VTP versions 1 and 2 don’t support PVLANs:
Switch(config)#vtp mode transparent
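
A quick sanity check before continuing (the output should list the VTP operating mode as transparent):
Switch#show vtp status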

Next, we implement the PVLANs on the switch. I’ve chosen VLAN 4 as the primary VLAN. VLAN 41, an isolated VLAN, will be set on the port going to the IP Phone. Finally, all other ports will be put into PVLAN 42, a community VLAN, so all other devices can communicate with each other. The naming is just to make troubleshooting easier.
Switch(config)#vlan 4
Switch(config-vlan)#name PRIMARY
Switch(config-vlan)#exit
Switch(config)#vlan 41
Switch(config-vlan)#name ISOLATED
Switch(config-vlan)#exit
Switch(config)#vlan 42
Switch(config-vlan)#name COMMUNITY
Switch(config-vlan)#exit

Returning to VLAN 4 to make it the primary, binding all PVLANs together, and setting the types of the secondary VLANs:
Switch(config)#vlan 4
Switch(config-vlan)#private-vlan primary
Switch(config-vlan)#private-vlan association 41,42
Switch(config-vlan)#exit
Switch(config)#vlan 41
Switch(config-vlan)#private-vlan isolated
Switch(config-vlan)#exit
Switch(config)#vlan 42
Switch(config-vlan)#private-vlan community
Switch(config-vlan)#exit
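
At this point the bindings can be verified; VLAN 4 should show up as the primary, with 41 and 42 as its isolated and community secondaries:
Switch#show vlan private-vlan type
Switch#show vlan private-vlan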

The VLANs have to be created first; otherwise, the ‘association’ command will not work. Once this is done, we can start binding ports to PVLANs.
First the port towards the router:
Switch(config)#interface f0/1
Switch(config-if)#switchport mode private-vlan promiscuous
Switch(config-if)#switchport private-vlan mapping 4 41,42

Then the isolated port:
Switch(config)#interface f0/2
Switch(config-if)#switchport mode private-vlan host
Switch(config-if)#switchport private-vlan host-association 4 41

And last, the community ports:
Switch(config)#interface range f0/3 - 24
Switch(config-if-range)#switchport mode private-vlan host
Switch(config-if-range)#switchport private-vlan host-association 4 42
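
Before testing, the port bindings can be checked: show vlan private-vlan now also lists the ports assigned to each VLAN pair, and the switchport details of an individual interface show its host association:
Switch#show vlan private-vlan
Switch#show interfaces f0/2 switchport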

That’s it. I can’t ping the IP Phone anymore, but I still have internet connectivity. Calling from the IP Phone works as usual.
No ping to the IP Phone, Google works.

That’s another task completed on my check-list towards CCNP!

VLAN limit and the VXLAN proposal.

Today I stumbled across a nice RFC draft which proposes a new kind of network topology in data centers (thanks to Omar Sultan for the link on his blog). It’s four days old (at the time of writing) and is proposed by some major players in the data center market: it mentions Cisco, Red Hat, Citrix and VMware, among others.

It proposes the use of VXLANs, or Virtual eXtensible Local Area Networks, which is basically a tunneling method to transport frames over an existing Layer 3 network. Personally, after reading through it, the first thing that came to mind was that this was another way to solve the large layer 2 domain problem that exists in data centers, in direct competition with TRILL, Cisco’s FabricPath, Juniper’s QFabric, and some other (mostly immature) protocols.

But then I realised it is so much more than that. It comes with 24 identifier bits instead of the 12 bits used with VLANs: an upgrade from 2^12 = 4,096 VLANs to 2^24 = roughly 16.7 million VXLAN segments. It also solves another problem: switch CAM tables would no longer need to keep track of all the virtual MAC addresses used by VMs, only the tunnel endpoints, which at first sight are the physical servers. (I don’t think CAM table exhaustion is a big problem yet. The draft claims ‘hundreds of VMs on a physical server’, which I find hard to believe, but with the increase of RAM and cores on servers this may soon become reality in the average data center.) It also proposes efficient mechanisms for Layer 2 to Layer 3 address mapping and for multicast traffic. And since it creates a Layer 2 tunnel, it allows for different Layer 3 protocols as well.

Yet I still see some unsolved problems. What about QoS? Different VMs may need different QoS classifications. I also noticed the use of UDP, which I understand, since it avoids the overhead of TCP, but I don’t feel comfortable sending important data on a best-effort basis. There is also no explanation of the impact on link MTU, though this is only a minor issue.
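
For reference, the MTU impact is easy to estimate from the proposed encapsulation:
14 (outer Ethernet) + 20 (outer IPv4) + 8 (UDP) + 8 (VXLAN header) = 50 bytes of overhead
So carrying a standard 1,500-byte frame requires a transport MTU of at least 1,550 bytes, which in practice means enabling baby giant or jumbo frames on the underlying network.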

In any case, it’s an interesting draft, and time will tell…