
Performance Gains Leveraging 10Gb Ethernet Networking in vSphere 5

WHITE PAPER


Introduction

The July 2011 launch of VMware vSphere® 5.0, which included the ESXi® 5.0 hypervisor along with vCloud® Director™ 1.5, delivered platforms for accelerating data center virtualization and provided the foundation for enterprise cloud computing.

The key attributes of that release package included the following:

- Virtual Machine (VM) performance scalability to handle demanding application workloads

- Deployment agility to rapidly provision and intelligently place resources

- Integrated software suite enabling cloud-scale IT operations

Figure 1 summarizes the VM and networking performance advances across VMware's platform generations. The latest release, vSphere 5.1, further evolves the high-performance vSphere 5.0 platform while enabling key networking enhancements.[1]

The increased compute power of a vSphere 5.1 VM, with 64 vCPU support, will further catalyze the deployment of high performance application workloads and drive increased virtualization densities, placing further demands on networking Input/Output (I/O) infrastructure.

Improvements in vSphere 5 are complemented by new hardware inflection points such as the rollout of Intel's Xeon® E5 (Romley) multi-core processors, the introduction of solid-state and cached solid-state storage arrays, and, most importantly from a networking perspective, the adoption of 10Gb Ethernet (10GbE) networking. Collectively, these developments provide the hardware ecosystem needed to take full advantage of the new vSphere 5.1 platform.

Note: Throughout this paper, the term vSphere refers to the complete suite of software, including the ESXi™ hypervisor. Also, vSphere 5 refers to both vSphere 5.0 and vSphere 5.1 unless a specific version is identified.

Figure 1: VMware platform performance evolution.

[1] http://files.shareholder.com/downloads/VMW/2004862281x0x529989/9d078424-f135-4a67-9f6c-d6dec83ba04e/FAD%20Preso.pdf


Importance of vSphere Networking—Multiple Use Cases

Networking I/O is a critical infrastructure element, and its workload is exploding. During the five-year period from 2010 to 2015, network traffic is forecast to quadruple to 4.8 Zettabytes per year,[2] with the intra-data center portion accounting for 76% of the total. This growth is fueled by a new data center architecture in which applications run across servers and across the network, whether housed in one data center or many, internally or in a cloud. This is explored in more detail below.

Because of the distributed nature of these applications, driven by virtualization, the network is a core component enabling and defining the overall application performance. Traditionally, network traffic in data centers flowed in a “North-South” direction, between servers and aggregation switches.

A virtualized data center's traffic is characterized by "East-West" flows, such as traffic between virtual machines (VMs), management traffic between VMs and the VMware vSphere vCenter™ Server, inter-host VMware vMotion™ VM migrations, and VM-to-storage-array traffic. Figure 2 is a simplified representation of this evolving data center networking architecture.

Some of the networked traffic flows, also called traffic types within the VMware vSphere context, are briefly described below:

- VMware vMotion traffic—vMotion moves a running VM from one host to another while allowing its working processes to continue during the migration. This migration, achieved over a network connection, moves the entire state of the VM from the source host to the destination host. Since the state of a VM includes its current memory content and all its configuration information, vMotion traffic is characterized by high network utilization (for workloads with large memory footprints) and is occasionally "bursty"; a rough sizing sketch follows this list.

- vSphere Fault Tolerance (FT) logging traffic—vSphere Fault Tolerance ensures continuous availability of a VM through the creation and maintenance of a secondary VM, identical to the primary, that can take over if a failover event occurs. Enabling fault tolerance generates logging traffic, which includes network and storage I/O data as well as the memory contents of the guest OS, flowing between the primary and secondary VM over a designated vmknic network port. vSphere FT traffic is characterized by high network utilization and demands low latency.

Figure 2: Data center East-West networking traffic.

[2] Cisco, Cisco Global Cloud Index: Forecast and Methodology, 2010-2015


- Management traffic—Configuration and management communication between an ESXi host and vCenter, as well as host-to-host High Availability (HA) communications, are examples of management traffic. This traffic flows through a vmknic and is characterized by low network bandwidth requirements, but it demands high availability.

- iSCSI/NFS traffic—As an alternative to Fibre Channel storage, Ethernet storage traffic between a VM and a Storage Area Network (SAN) or Network File System (NFS) server is carried over vmknic ports, and this traffic varies according to disk I/O activity. Ethernet storage traffic is characterized by occasional high network utilization and low latency, the latter being particularly important to avoid disruptions in access to storage. With end-to-end jumbo frame configuration, more data is transferred with each Ethernet frame, reducing the number of frames on the network.

- Virtual Machine traffic—This is "traditional" North-South client-server data traffic, and its characteristics depend on the type of workload running in the VM.
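To put the bandwidth appetite of these traffic types in perspective, the following sketch estimates the time to copy a VM's memory image over a single network link. It is a deliberately simplified, illustrative model: the single-pass copy and the assumed 80% effective link efficiency are not vSphere measurements, and real vMotion iteratively pre-copies memory pages that are dirtied during migration.

```python
# Illustrative estimate of vMotion memory-copy time at different link speeds.
# The single-pass copy model and the 80% effective-throughput factor are
# simplifying assumptions; real vMotion iteratively pre-copies dirtied pages.

def vmotion_copy_seconds(vm_memory_gb, link_gbps, efficiency=0.8):
    """Estimate seconds to transfer a VM's memory image over one link."""
    bits_to_move = vm_memory_gb * 8 * 1e9          # memory size in bits
    effective_bps = link_gbps * 1e9 * efficiency   # usable link throughput
    return bits_to_move / effective_bps

for memory_gb in (16, 64, 256):
    t1 = vmotion_copy_seconds(memory_gb, 1)     # single 1GbE uplink
    t10 = vmotion_copy_seconds(memory_gb, 10)   # single 10GbE uplink
    print(f"{memory_gb:3d} GB VM: ~{t1:6.0f} s on 1GbE, ~{t10:5.0f} s on 10GbE")
```

Even this crude model shows why link speed matters for memory-heavy VMs: raw line rate dominates migration time, which is consistent with the vMotion results discussed later in this paper.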

In summary, it is readily apparent that networking is front and center in data center virtualization, a resource as important as CPU and memory in determining the performance of a virtualized data center.

Key vSphere 5.0/5.1 Networking Innovations

The vSphere 5.0 and 5.1 platforms introduced multiple networking innovations, some of which are discussed below.

vSphere Network I/O Control

Network I/O Control (NIOC), originally introduced in vSphere 4.1, extends VMware's Distributed Resource Scheduler (DRS), which continuously monitors utilization across a resource pool and intelligently allocates available resources among virtual machines according to business needs, to the management of network traffic. Consistent with the importance of networking resources, NIOC enables prioritization of the predefined network traffic types, many of which were discussed above. NIOC has become particularly important with the advent of 10GbE, where a single adapter carries all network traffic.

With vSphere 5, network administrators can now add new traffic types through user-defined resource pools. These user-defined resource pools could represent different tenants in a cloud data center or lines of business in a virtualized enterprise data center.

Through the allocation of the familiar (vis-à-vis DRS) shares and limits values to these user-defined resource pools, NIOC enables management of network I/O resources for all traffic, granularly down to individual VM workloads within a resource pool. This capability is essential for providing Service Level Agreement (SLA) guarantees for critical application traffic.
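The interaction of shares and limits is easiest to see with a small numeric model. The sketch below is a simplified illustration, not the actual NIOC scheduler; the pool names, share values, and the assumption that all pools are simultaneously saturating the link are chosen for demonstration.

```python
# Simplified model of NIOC-style shares and limits on one 10GbE uplink.
# Pool names, share values and limits are illustrative, not vSphere defaults.

def allocate_bandwidth(pools, link_gbps=10.0):
    """Split link bandwidth by shares under full contention, then cap by limits."""
    total_shares = sum(p["shares"] for p in pools.values())
    allocation = {}
    for name, p in pools.items():
        fair_share = link_gbps * p["shares"] / total_shares
        limit = p.get("limit_gbps", link_gbps)   # a limit caps a pool unconditionally
        allocation[name] = min(fair_share, limit)
    return allocation

pools = {
    "vm":      {"shares": 100},
    "vmotion": {"shares": 50},
    "ft":      {"shares": 50, "limit_gbps": 2.0},
    "mgmt":    {"shares": 25},
}
for name, gbps in allocate_bandwidth(pools).items():
    print(f"{name:8s} -> {gbps:.2f} Gb/s")
```

Note that shares only arbitrate during contention; when the link is otherwise idle, a pool may burst up to its limit.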

Also, to ensure end-to-end Quality of Service (QoS) and compliance with SLAs, IEEE 802.1p tagging at the MAC level is required in addition to host-based I/O resource provisioning. In vSphere 5, packets leaving the host can be tagged, enabling improved integration with the physical network fabric.
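For reference, the 802.1p priority is carried in the 3-bit Priority Code Point (PCP) field of the IEEE 802.1Q VLAN tag. The sketch below is generic bit manipulation, not a vSphere API; it shows how the 16-bit Tag Control Information field combines priority and VLAN ID:

```python
# Build an IEEE 802.1Q Tag Control Information (TCI) field:
# PCP (3 bits, the 802.1p priority) | DEI (1 bit) | VLAN ID (12 bits).

def build_tci(priority, vlan_id, dei=0):
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 4095
    return (priority << 13) | (dei << 12) | vlan_id

tci = build_tci(priority=5, vlan_id=100)   # priority 5 traffic on VLAN 100
print(f"TCI = 0x{tci:04x}")                # -> TCI = 0xa064
```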

Multiple Network Adapter vMotion

vSphere 5 enables vMotion to be accelerated by using multiple adapters concurrently, with support for up to four 10GbE adapters or sixteen 1GbE adapters. This allocation of additional bandwidth results in faster migration times, even for very large and memory-active VMs.

This enhancement enables support for multiple concurrent vMotion connections of approximately 8Gb/s each in vSphere 5, compared to a single 8Gb/s connection in vSphere 4.1.[3]

[3] HP Discover 2012, TB#3258: The benefits and right practices of 10GbE networking with VMware vSphere 5
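A back-of-the-envelope calculation, using the approximate 8Gb/s-per-connection figure above, illustrates how additional adapters shorten the memory-copy phase. The VM size and the assumption of perfectly linear scaling across adapters are illustrative only:

```python
# Rough effect of multiple adapters on vMotion memory-copy time, using the
# ~8 Gb/s effective throughput per 10GbE connection cited above. Perfect
# load balancing across adapters is an idealizing assumption.

def multi_nic_copy_seconds(vm_memory_gb, adapters, per_link_gbps=8.0):
    aggregate_gbps = adapters * per_link_gbps
    return vm_memory_gb * 8 / aggregate_gbps   # GB -> Gb, then divide by Gb/s

for n in (1, 2, 4):
    print(f"{n} x 10GbE: 256 GB memory copied in ~{multi_nic_copy_seconds(256, n):.0f} s")
```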


VXLAN Overlay Networking

At the time of publication of this paper, the availability of this important new feature, announced at VMworld 2011,[4] is not confirmed, but it is included in this discussion of advanced networking features.

While virtualization has reduced the cost and time required to deploy a new application from weeks to minutes, and from thousands of dollars to a few hundred, reconfiguring the network for a new or migrated virtual workload can still take a week and cost thousands of dollars.

Scaling existing networking technology for multi-tenant cloud infrastructures requires solutions that enable VM communication and migration across Layer 3 boundaries without impacting connectivity, while ensuring isolation for hundreds of thousands of logical network segments and maintaining existing VM IP and MAC addresses. VXLAN, an IETF proposed standard from VMware, Emulex and other leading organizations, addresses these challenges.
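VXLAN reaches this scale by encapsulating Layer 2 frames in UDP and identifying each logical segment with a 24-bit VXLAN Network Identifier (VNI), which allows roughly 16 million segments versus the 4,094 available with conventional VLANs. As a minimal sketch of the encapsulation format defined in the VXLAN Internet-Draft, the following builds the 8-byte VXLAN header (the VNI value is arbitrary):

```python
import struct

# Build the 8-byte VXLAN header from the IETF VXLAN draft:
# flags byte (0x08 = valid-VNI bit), 24 reserved bits,
# 24-bit VNI, 8 reserved bits.

def vxlan_header(vni):
    assert 0 <= vni < 2**24, "the VNI is a 24-bit segment identifier"
    return struct.pack("!II", 0x08 << 24, vni << 8)

header = vxlan_header(vni=5000)
print(header.hex())   # -> 0800000000138800
```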

Link Layer Discovery Protocol (LLDP) Support

Support for the standards-based LLDP protocol (IEEE 802.1AB) enables a network administrator to see which physical switch port is connected to a vSphere Distributed Switch, including basic port information. Similarly, vSphere 5 also supports the Cisco Discovery Protocol for Cisco infrastructure. This feature minimizes the possibility of configuration errors.

vSphere Network Rollback

vSphere 5.1 allows rollback and recovery from network configuration errors by utilizing previous configuration versions. Rollback mitigates the loss of connectivity to a host. There are two types of rollback capabilities:

- Host Networking Rollback—Any network change that disconnects a host triggers a rollback. Examples of host networking configuration changes that might trigger a rollback include:

  - Changes to the speed or duplex of a physical NIC.

  - Removal of a physical NIC that contains the management VMkernel network adapter.

- Distributed Switch Rollback—Incorrect updates made to distributed switches, distributed port groups, or distributed ports trigger a switch rollback. Examples include:

  - Changing the Maximum Transmission Unit (MTU).

  - Blocking all ports in the distributed port group containing the management VMkernel network adapter.

vSphere Distributed Switch (vDS) Health Check

This vSphere 5.1 feature helps ensure proper physical and virtual network operation by monitoring the health of physical network configurations, including VLAN, MTU and teaming settings, and by identifying configuration errors for troubleshooting. An example of this feature is the following:

- Ensuring that the jumbo frame MTU setting per VLAN on a physical switch matches the vDS MTU setting (illustrated in the sketch below).
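As an illustration of the kind of comparison such a health check performs (a hypothetical sketch, not the actual vSphere implementation), the following flags VLANs whose physical MTU cannot carry the MTU configured on the vDS:

```python
# Illustrative MTU consistency check between a vDS and physical switch VLANs.
# The input data structures are hypothetical stand-ins for values a health
# check would gather from the vDS configuration and the physical switch.

def mtu_mismatches(vds_mtu, physical_vlan_mtus):
    """Return VLANs whose physical MTU is smaller than the vDS MTU."""
    return {vlan: mtu for vlan, mtu in physical_vlan_mtus.items() if mtu < vds_mtu}

for vlan, mtu in mtu_mismatches(9000, {10: 9000, 20: 1500, 30: 9000}).items():
    print(f"VLAN {vlan}: physical MTU {mtu} < vDS MTU 9000; jumbo frames will be dropped")
```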

NetFlow Statistics Collection

Support for the NetFlow protocol in vSphere 5 enables monitoring and collection of VM IP traffic information. Monitoring application flows over a period of time assists capacity planning and determines whether I/O resources are properly utilized by different applications.
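As a simplified illustration of how collected flow records feed capacity planning, the following aggregates observed traffic per source. The records here are a minimal (source, destination, bytes) subset of what NetFlow actually exports, with made-up addresses:

```python
from collections import defaultdict

# Aggregate observed bytes per source address from simplified flow records.

flows = [
    ("10.0.0.11", "10.0.0.50", 1_200_000),   # e.g. VM A -> storage
    ("10.0.0.12", "10.0.0.50",   300_000),   # e.g. VM B -> storage
    ("10.0.0.11", "10.0.0.99", 4_800_000),   # e.g. VM A -> client network
]

bytes_by_source = defaultdict(int)
for src, _dst, nbytes in flows:
    bytes_by_source[src] += nbytes

for src, total in sorted(bytes_by_source.items(), key=lambda kv: -kv[1]):
    print(f"{src}: {total / 1e6:.1f} MB observed")
```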

Export/Import/Restore Distributed Port Group Settings

This feature, available only with the vSphere Web Client 5.1, enables the creation of backups of network settings (configurations) at the Distributed Port Group level, making them available for subsequent deployments.

[4] www.vmware.com/company/news/releases/vmw-cisco-vmworld-083011.html


MAC Addresses

MAC addresses allow packet delivery to be restricted to the intended recipient. MAC addresses are generated automatically for virtual network adapters; alternatively, static MAC addresses can be assigned. Two allocation schemes are shown below:

- Prefix-Based MAC Address Allocation—Prefix-based allocation allows specifying an Organizationally Unique Identifier (OUI) other than the VMware default value. This is supported on vSphere 5.1 (see the sketch after this list).

- Range-Based MAC Address Allocation—Range-based allocation allows specifying ranges of OUI-based Locally Administered Addresses, set to include or exclude specific ranges; MAC addresses are then generated only from within a specified range. This is supported on vSphere 5.1 and later.
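The sketch below illustrates the idea behind prefix-based allocation: generating addresses beneath a chosen OUI whose locally administered bit is set. The OUI used is an arbitrary example, not a VMware value:

```python
import random

# Illustrative prefix-based MAC generation beneath a chosen OUI.
# Bit 0x02 in the first octet marks the address as locally administered.

def generate_mac(oui=(0x02, 0x1B, 0x32)):   # example OUI, chosen arbitrarily
    nic_specific = [random.randint(0, 255) for _ in range(3)]
    return ":".join(f"{octet:02x}" for octet in (*oui, *nic_specific))

print(generate_mac())   # e.g. 02:1b:32:7f:a0:13
```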

Benefits of 10GbE Networking for vSphere 5.0/5.1

The ramp of 10GbE on new servers is driven by the confluence of multiple factors:

- Explosive growth of Web 2.0 and customer-facing mobile applications

- I/O aggregation resulting from increasing virtualization densities (virtual workloads per physical host), driven by the launch of ever more powerful Intel processors (Intel's Xeon® E5 [Romley])

- Cloud-driven demand for isolated tenant networks with "future-proofed" scalability

Multiple benefits accrue from deploying 10GbE network adapters, which overcome several disadvantages of 1GbE adapters (a consolidation sketch follows this list):

- Bandwidth limitation—Network bandwidth for any traffic type (VM, vMotion, etc.) is limited to the bandwidth of a single 1GbE adapter, even if more bandwidth is available on other adapters used for other traffic.

- Increased complexity—Installing multiple 1GbE adapters in a host server adds cabling and management complexity and increases the likelihood of configuration errors.

- Increased capital costs (CAPEX)—Installing multiple 1GbE adapters requires more physical switch ports, leading to higher CAPEX, including additional switches. Also, the decreasing price differential between 1GbE and 10GbE adapters can shift the economics in favor of installing a smaller number of 10GbE adapters while delivering equivalent or greater overall bandwidth.

- Bandwidth under-utilization—Fixed bandwidth allocation sized to accommodate peak bandwidth for each of VMware's traffic types results in sub-optimal average network bandwidth utilization.
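A simple port-count comparison makes the consolidation argument concrete. The per-traffic-type bandwidth figures below are assumptions for illustration, and redundancy requirements are ignored:

```python
import math

# Illustrative switch-port count for one host with dedicated 1GbE adapters
# per traffic type versus shared 10GbE links arbitrated by NIOC.
# Bandwidth figures are assumed for illustration; redundancy is ignored.

traffic_gbps = {"vm": 3.0, "vmotion": 2.0, "storage": 3.0, "mgmt": 0.1}
total = sum(traffic_gbps.values())

ports_1gbe = sum(math.ceil(bw / 1.0) for bw in traffic_gbps.values())
ports_10gbe = math.ceil(total / 10.0)

print(f"{total:.1f} Gb/s total: {ports_1gbe} x 1GbE ports vs {ports_10gbe} x 10GbE port")
```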

In addition to the reasons above, the greatest benefits of 10GbE networking come from improved performance. An analysis by VMware,[5] using the SPECweb2005 benchmark (representing a typical data center Web architecture), indicated that the time to complete a vMotion, under every usage scenario tested, was eight to ten times shorter with a 10GbE configuration than with 1GbE.

The benefit of 10GbE is also apparent in another use case associated with vSphere FT. A VM performing throughput-intensive disk reads causes substantial logging traffic to flow between the primary and secondary hosts. In this situation, because of network bandwidth limitations, VMware reluctantly recommends that the secondary host issue its own disk reads to the shared storage instead of receiving the data over the logging network.[6] Utilizing a 10GbE adapter would obviate the need for this sub-optimal configuration.

One final benefit of using 10GbE adapters is the ability to perform eight simultaneous (concurrent) vMotion migrations,[7] compared with only four when using a 1GbE adapter.

Conclusions

vSphere 5 adds a plethora of networking features that improve VM performance scalability, VM network configuration management, VM I/O provisioning and VM performance monitoring, enabling end-to-end SLAs for critical workloads. The deployment of cloud-based networks is also simplified with vSphere 5. Furthermore, 10GbE network adapters deliver multiple significant benefits compared to 1GbE, making them the preferred networking infrastructure choice.

[5] www.vmware.com/files/pdf/vmotion-perf-vsphere5.pdf
[6] http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1011965
[7] http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.vcenterhost.doc_50%2FGUID-F0C0FFD7-FC60-4CF9-B4E4-106FC1B97730.html

World Headquarters: 3333 Susan Street, Costa Mesa, California 92626 +1 714 662 5600
Bangalore, India +91 80 40156789 | Beijing, China +86 10 68499547
Dublin, Ireland +353 (0)1 652 1700 | Munich, Germany +49 (0)89 97007 177
Paris, France +33 (0) 158 580 022 | Tokyo, Japan +81 3 5325 3261
Wokingham, United Kingdom +44 (0)118 977 2929

www.emulex.com

13-0234 · 8/12