ComputerWeekly.com

vSwitch best practices: Know what powers your virtualised network

By Rachel Shuster

As an increasing number of servers in the data centre become virtualised, network administrators and engineers are pressed to find ways to better manage traffic running between these machines. Virtual switches (vSwitches) aim to manage and route traffic in a virtual environment, but often network engineers don't have direct access to these switches. When they do, they often find that vSwitches living inside hypervisors don't offer the type of visibility and granular traffic management that they need.

Yet, there are some alternatives. This vSwitch best practices tutorial breaks down what vSwitches do, how they impact the network, their potential challenges, and strategies that network administrators can use to manage vSwitches and virtual traffic effectively.

How virtual switches impact the network

Moving to a virtual environment in the data centre helps to increase speed and utilisation of data moving across the network, but it also poses new network challenges. In a virtualised setting, the network's access layer is pulled into the hypervisor and built-in vSwitches manage the traffic. But these switches have unique issues.

Traditional physical switches determine where to send Ethernet frames based on the MAC addresses of the devices attached to them. vSwitches behave similarly in that each virtual host must connect to a vSwitch in the same way a physical host must connect to a physical switch.
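For illustration only, here is a minimal Python sketch (not VMware code) of the MAC-learning behaviour that physical and virtual switches share: the switch records which port each source MAC address was seen on, forwards to that port when the destination is known and floods when it is not. The port numbers and MAC addresses are invented for the example.

# Toy model of MAC learning and forwarding; ports and MACs are illustrative.
class ToySwitch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # source MAC -> port it was last seen on

    def receive(self, in_port, src_mac, dst_mac):
        """Learn the source, then forward: one port if known, otherwise flood."""
        self.mac_table[src_mac] = in_port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in range(self.num_ports) if p != in_port]

switch = ToySwitch(num_ports=4)
print(switch.receive(0, "00:50:56:aa:bb:01", "00:50:56:aa:bb:02"))  # unknown destination: flood to [1, 2, 3]
print(switch.receive(1, "00:50:56:aa:bb:02", "00:50:56:aa:bb:01"))  # learned destination: [0]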

But a closer look reveals major differences between physical and virtual switches. With a physical switch, when a dedicated network cable or switch port goes bad, only one server goes down. Yet with virtualisation, one cable could offer connectivity to 10 or more virtual machines (VMs), causing a loss in connectivity to multiple VMs. What's more, connecting multiple VMs requires more bandwidth, which must be handled by the vSwitch.

These differences are especially apparent in larger networks with more intricate designs, such as those that support VM infrastructure across data centres or disaster recovery sites.

Since vSwitches are manually configured and managed per ESX host, an administrator or network engineer without a solid understanding of virtualisation and ESX management can easily introduce major misconfigurations or errors.

To help facilitate the move to virtualisation in the data centre, Cisco and VMware developed two technologies to increase the functionality of the vSwitch inside an ESX host. VMware introduced the distributed virtual switch (DVS), a vSwitch whose ports and management span all ESX servers in the cluster. Cisco developed the Nexus 1000V, a replacement vSwitch for ESX that gives control of the network back to the network operations team.

Read more about how vSwitches impact the network in a virtualised data centre.

Virtual switching choices and architecture design considerations

Virtual switches (vSwitches) are the core networking component on a vSphere host, connecting the physical NICs (pNICs) in the host server to the virtual NICs (vNICs) in virtual machines. In planning vSwitch architecture, engineers must decide how they will use those pNICs when assigning vSwitch port groups, in order to ensure redundancy, segmentation and security.

There are three kinds of vSphere vSwitch. The vNetwork Standard Switch (vSS) is best suited to small environments, as each vSwitch must be configured individually on each host. Another option is the vNetwork Distributed Switch (vDS), which is similar to a standard vSwitch but is configured centrally using vCenter Server. Then there is the Cisco Nexus 1000v, a hybrid distributed vSwitch developed by Cisco and VMware to add greater intelligence. Which vSwitch you can choose also depends on licensing: the distributed options require a vSphere Enterprise Plus licence.
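As a rough illustration of why the standard vSwitch suits only smaller environments, the Python sketch below uses the open source pyVmomi vSphere API bindings to create a standard vSwitch and a VLAN-tagged port group on a single host; in a large cluster, repeating this for every host is exactly the per-host work a distributed switch removes. The host names, credentials, vmnic names and VLAN number are placeholder assumptions, not values from the article.

# Hedged sketch using pyVmomi; all names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab convenience only; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ctx)

# A standard vSwitch has to be configured host by host, so look up one ESX/ESXi host.
host = si.content.searchIndex.FindByDnsName(dnsName="esx01.example.com", vmSearch=False)
net_sys = host.configManager.networkSystem

# Create the vSwitch and bond it to two physical uplinks for redundancy.
vss_spec = vim.host.VirtualSwitch.Specification()
vss_spec.numPorts = 128
vss_spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic2", "vmnic3"])
net_sys.AddVirtualSwitch(vswitchName="vSwitch1", spec=vss_spec)

# Add a virtual machine port group carrying VLAN 20 on that vSwitch.
pg_spec = vim.host.PortGroup.Specification()
pg_spec.name = "VM-Network-VLAN20"
pg_spec.vlanId = 20
pg_spec.vswitchName = "vSwitch1"
pg_spec.policy = vim.host.NetworkPolicy()
net_sys.AddPortGroup(portgrp=pg_spec)

Disconnect(si)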

All of these switches support 802.1Q tagging, which allows multiple VLANs to be carried over a single physical switch port to reduce the number of pNICs needed in a host. This works by applying tags to network frames to identify them as belonging to a certain VLAN. Doing this in vSphere requires choosing the appropriate tagging mode, which depends on where the tags are applied.
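To make the tagging mechanism concrete, the short Python sketch below builds the four-byte 802.1Q header (TPID 0x8100 followed by a 12-bit VLAN ID) and inserts it into an Ethernet frame immediately after the source MAC address. The addresses, VLAN number and payload are invented for the example.

import struct

def tag_frame(frame: bytes, vlan_id: int, priority: int = 0) -> bytes:
    """Insert an 802.1Q tag (TPID 0x8100 + TCI) after the destination and source MACs."""
    if not 0 < vlan_id < 4095:
        raise ValueError("VLAN ID must be 1-4094")
    tci = (priority << 13) | vlan_id          # 3-bit priority, DEI bit 0, 12-bit VLAN ID
    tag = struct.pack("!HH", 0x8100, tci)
    return frame[:12] + tag + frame[12:]      # bytes 0-11 are the two MAC addresses

# Minimal untagged frame: dst MAC, src MAC, EtherType 0x0800 (IPv4), dummy payload.
untagged = bytes.fromhex("ffffffffffff" "005056aabb01" "0800") + b"payload"
print(tag_frame(untagged, vlan_id=20).hex())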

Security is also an important consideration when using vSphere vSwitches. Keeping the different types of ports and port groups separate, rather than placing them all together on a single vSwitch, offers higher security and better management. These port types include service console, VMkernel and virtual machine.

vSwitch redundancy is another important consideration. Redundancy is achieved by assigning at least two pNICs to a vSwitch, with each pNIC connecting to a different physical switch.
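A minimal sketch of that failover idea, with invented uplink names: two pNICs back one vSwitch, traffic leaves through the first healthy uplink in the failover order, and if that link fails traffic shifts to the surviving uplink, which is cabled to a different physical switch.

# Conceptual active/standby uplink failover; vmnic names and link states are illustrative.
class TeamedVSwitch:
    def __init__(self, active_uplink, standby_uplink):
        self.order = [active_uplink, standby_uplink]            # failover order
        self.link_state = {active_uplink: "up", standby_uplink: "up"}

    def link_down(self, uplink):
        self.link_state[uplink] = "down"

    def egress_uplink(self):
        """Return the first healthy uplink in failover order, or None if both are down."""
        for uplink in self.order:
            if self.link_state[uplink] == "up":
                return uplink
        return None

vswitch = TeamedVSwitch("vmnic2", "vmnic3")   # each uplink cabled to a different physical switch
print(vswitch.egress_uplink())                # vmnic2
vswitch.link_down("vmnic2")                   # cable or switch port failure
print(vswitch.egress_uplink())                # vmnic3 takes over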

Read more about vSphere vSwitch design considerations.

vSwitch architecture best practices

To apply vSwitch architecture best practices, it is important to understand that the right architecture varies with the kind of traffic being managed. Traffic types to plan for include ESX service console and ESXi management traffic, VMkernel traffic when using iSCSI/NFS storage, and virtual machine traffic, including network traffic between VMs on the same vSwitch and port group. Depending on the type of traffic you're dealing with, choose the vSwitch architecture that best maintains redundancy, segmentation and security.

The second part of this two-part tip offers several vSwitch configuration scenarios, which vary based on the number of NICs in the host server, along with differing vSwitch architectures for redundancy, segmentation and security.
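As one way to reason about such scenarios, the sketch below lays out a hypothetical six-NIC host as plain Python data, splitting management, storage and virtual machine traffic across separate vSwitches that each keep two uplinks for redundancy. The vSwitch names, vmnic assignments and VLAN numbers are assumptions for illustration, not a prescribed design.

# Hypothetical layout for a six-NIC host; names, NICs and VLANs are illustrative.
host_network_plan = {
    "vSwitch0": {
        "uplinks": ["vmnic0", "vmnic1"],     # each to a different physical switch
        "port_groups": {
            "Management": {"traffic": "service console / ESXi management", "vlan": 10},
            "vMotion":    {"traffic": "VMkernel vMotion", "vlan": 11},
        },
    },
    "vSwitch1": {
        "uplinks": ["vmnic2", "vmnic3"],
        "port_groups": {
            "Storage": {"traffic": "VMkernel iSCSI/NFS", "vlan": 20},
        },
    },
    "vSwitch2": {
        "uplinks": ["vmnic4", "vmnic5"],
        "port_groups": {
            "VM-Prod": {"traffic": "virtual machine", "vlan": 30},
            "VM-DMZ":  {"traffic": "virtual machine", "vlan": 31},
        },
    },
}

# Quick sanity check: every vSwitch keeps at least two uplinks for redundancy.
for name, cfg in host_network_plan.items():
    assert len(cfg["uplinks"]) >= 2, f"{name} has no redundant uplink"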

Read more about vSwitch architecture best practices.

Virtual networking challenges

Networking teams often lose control over management in a virtualised environment. In fact, virtualisation can introduce a host of new networking challenges, including limited traffic visibility, the need for new kinds of network policy enforcement, cumbersome manual vSwitch and network reconfiguration, and strain on I/O bandwidth due to VM migration. Beyond these technical challenges, virtualisation can also cause friction between virtualisation and network administrators.

Many of these issues stem from the fact that traffic between VMs on the same host never leaves the server to cross the physical network, making it difficult for networking teams to monitor or manage that traffic. This lack of visibility also means that firewalls, QoS policies, ACLs and IDS/IPS systems deployed on the physical network cannot see or act on this inter-VM activity.

What's more, neither the standard nor the distributed vSwitch offers features that lend themselves to easy management. Administrators have control only over the uplink ports from the physical NICs in the host, not over the numerous virtual ports that exist on a vSwitch.

To address these issues, networking teams are turning to new network management and security products, such as Reflex Systems' Virtual Management Center, Altor Networks' Virtual Firewall and Catbird's vSecurity. All are specifically designed to secure, monitor and control virtual network traffic on a host.

Read more about problems with virtualisation networking.

Read more about issues with managing vSwitches.

Network edge technologies improve vSwitch management

As networking professionals deal with a new layer of virtual network switches (vSwitches) in the network environment, issues with management, policy enforcement, security and scalability can surface.

Network engineers may find the solution to such problems with a series of network edge virtualisation technologies.

Some of these technologies include distributed virtual switching (DVS), which allows the data planes of multiple virtual switches to be controlled by an external management system. The Nexus 1000v switch is Cisco’s approach to DVS.
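A rough sketch of that idea, using hypothetical class names rather than any vendor's API: a single controller object holds the port group definitions and every per-host data plane simply reads its configuration from it, so a change made once is visible on every host in the cluster.

# Conceptual distributed virtual switching: one control point, many per-host data planes.
class DvsController:
    """Central management and control plane shared by all hosts (hypothetical)."""
    def __init__(self):
        self.port_groups = {}

    def define_port_group(self, name, vlan):
        self.port_groups[name] = {"vlan": vlan}

class HostDataPlane:
    """Per-host switching component that takes its configuration from the controller."""
    def __init__(self, hostname, controller):
        self.hostname = hostname
        self.controller = controller

    def port_groups(self):
        return self.controller.port_groups    # nothing is configured per host

controller = DvsController()
hosts = [HostDataPlane(f"esx{i:02d}", controller) for i in range(1, 4)]

controller.define_port_group("VM-Prod", vlan=30)            # configured once...
print(all("VM-Prod" in h.port_groups() for h in hosts))     # ...seen by every host: True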

Edge virtual bridging (EVB) helps to relieve vSwitch management issues by offering vSwitch visibility and tighter policy control over virtual machine traffic flow. Ultimately, EVB provides a standards-based solution that eliminates the need for software-based switching within hypervisors.

In the future, single root I/O virtualisation (SR-IOV) may also rectify vSwitch and virtual traffic management problems by moving software-based virtual switch functionality into PCI NIC hardware and giving hardware support for edge networking technologies.

Read more about edge virtualisation technologies for improved virtualisation management.

Maintaining control with distributed virtual switches

Recognising that the networking team's mission with virtualisation is to gain better visibility into and management of virtual traffic, Cisco created the Nexus 1000v distributed virtual switch (vSwitch). This distributed virtual switch shifts virtual network management inside a virtual host back to network administrators, helping to alleviate tensions between server and network teams and providing tighter security and manageability inside the host.

The Cisco Nexus 1000v distributed virtual switch comprises a Virtual Supervisor Module (VSM) and a Virtual Ethernet Module (VEM), which work together with other components of the virtual environment to ensure smooth data transfer.

Cisco also created the Nexus 1010v, a physical appliance version of the VSM for those who aren't comfortable having their VSMs run as virtual appliances on ESX and ESXi hosts.

Read more about the Cisco Nexus 1000v and the Cisco Nexus 1010v.

Generic VMware vSwitch vs. Cisco Nexus 1000v: Which way to go?

There's a lot to consider in choosing among the default VMware vSwitch, VMware's vNetwork Distributed Switch included in top-of-the-line Enterprise Plus editions, and the third-party Cisco Nexus 1000V.

On one hand, the basic vSwitch is free, has a straightforward and speedy management interface, and supports all the basic features most environments need through an interface familiar to experienced VMware administrators. The Cisco Nexus 1000V, meanwhile, introduces additional cost and a more intricate management interface, but it provides access to advanced Cisco IOS features, offers potential financial benefits and ensures a deeper level of security.

VMware administrator and expert Bob Plankers and virtualisation author David Davis analyse the pros and cons of both.

Read this Cisco Nexus 1000v vs. VMware vSwitch comparison.

Open vSwitch provides traffic control and visibility for network administrators

The Open vSwitch presents an open source software alternative to the Cisco Nexus 1000v that is specifically aimed at managing multi-tenant public cloud computing environments -- and at giving network administrators visibility into, and control of, traffic running between and within virtual machines. Yet the Open vSwitch, which is backed by Citrix, still doesn't work in a VMware environment.

The Open vSwitch Project, which is backed by network control software startup Nicira Networks, addresses networking professionals’ concerns with virtualisation and works with network controller software to provide distributed virtual switching. This means that the switch and controller software can establish cluster-level network configurations across several servers, which eliminates the need to configure the network separately for each VM and physical machine. The switch also allows for VLAN trunking; visibility via NetFlow, sFlow and RSPAN; and use of the OpenFlow protocol for management.
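As a hedged illustration of those features, the Python sketch below shells out to the standard ovs-vsctl tool to build a bridge, add a trunked physical uplink, place a VM interface on a VLAN, export NetFlow records to a collector and hand flow control to an OpenFlow controller. It assumes a host with Open vSwitch installed; the bridge, interface and collector addresses are placeholders.

# Driving Open vSwitch via its ovs-vsctl CLI; interface names and IPs are placeholders.
import subprocess

def ovs(*args):
    subprocess.run(["ovs-vsctl", *args], check=True)

ovs("add-br", "br0")                          # create the virtual switch
ovs("add-port", "br0", "eth0")                # physical uplink; no tag means it trunks all VLANs
ovs("add-port", "br0", "vnet0", "tag=10")     # VM interface as an access port on VLAN 10

# Export NetFlow records to an external collector for traffic visibility.
ovs("--", "set", "Bridge", "br0", "netflow=@nf",
    "--", "--id=@nf", "create", "NetFlow",
    'targets="192.0.2.10:5566"', "active-timeout=30")

# Hand flow control to an external OpenFlow controller.
ovs("set-controller", "br0", "tcp:192.0.2.10:6633")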

Some additional features of the Open vSwitch include tight traffic control as a result of the OpenFlow switching protocol and remote management capabilities that allow for more control over implementing network policies.

Now that OpenFlow and software-defined networking in general are getting more attention in the networking world, analysts expect the Open vSwitch to gain more traction -- particularly if its use expands into vSphere environments.

Read more about the Open vSwitch initiative.

14 Oct 2011
