Unleash the Performance of vSphere 5.1 with 16Gb Fibre Channel

Emulex White Paper



    Advanced Management Solutions


    The July 2011 launch of VMware vSphere 5.0, which included the ESXi 5.0 hypervisor along with vCloud Director 1.5, delivered a platform for accelerating data center virtualization and provided the foundation for enterprise cloud computing.

    The key attributes of that release package included the following:

    • Virtual machine (VM) performance scalability to handle demanding application workloads

    • Deployment agility to rapidly provision and intelligently place resources

    • An integrated software suite enabling cloud-scale IT operations

    One of the highlights of the vSphere 5.0 release was a set of virtualization performance enhancements to handle demanding application workloads: VM support for 32 virtual CPUs (vCPUs), double what was available with vSphere 4.0, and 1 terabyte (TB) of RAM, quadrupling the vSphere 4.0 memory limit. Complementary improvements in storage and networking capabilities supported these increased virtualization capabilities. For example, storage leader EMC achieved record-breaking performance of 1,000,000 input/output operations per second (IOPS) on a Symmetrix VMAX storage system using Emulex's LPe12002 8Gb Fibre Channel (8GFC) Host Bus Adapters (HBAs).1

    The latest release, vSphere 5.1, further evolves the high-performance vSphere 5.0 platform, doubling vCPU support to 64 and supporting 16GFC link speeds, with the goal of enabling key enterprise storage enhancements. Figure 1 summarizes the VM and IOPS performance advances across VMware's platform generations.2

    Figure 1. VMware platform performance evolution.

    The increased compute power of a vSphere 5.1 VM will further catalyze the deployment of high-performing application workloads and drive increased virtualization densities, placing further demands on storage input/output (I/O) infrastructure.

    Improvements in vSphere 5 are complemented by new hardware inflection points, such as the rollout of Intel's Xeon E5 (Romley) multi-core processor, the introduction of solid-state and cached solid-state storage arrays and, most importantly from a storage perspective, in-box support for 16GFC storage I/O networking. Collectively, these developments provide the hardware ecosystem needed to take full advantage of vSphere 5.1.

    Note: Throughout this paper, the term vSphere refers to the complete software suite, including the ESXi hypervisor. Also, vSphere 5 refers to both vSphere 5.0 and vSphere 5.1 unless a specific version is identified.

    1 http://www.emulex.com/blogs/labs/2011/08/31/emc-achieves-record-breaking-one-million-iops-vsphere-50-emulex-8gb-fibre-channel-hbas

    2 http://files.shareholder.com/downloads/VMW/2004862281x0x529989/9d078424-f135-4a67-9f6c-d6dec83ba04e/FAD%20Preso.pdf



    Enterprise Class Storage with Shared Storage SANs

    A dedicated, shared Storage Area Network (SAN), which provides servers with access to block-level storage, delivers multiple benefits in a vSphere environment, including:

    • Performing live migrations using VMware vMotion

    • Providing multiple paths from server to storage, eliminating a single point of failure

    • Utilizing lower-cost diskless servers and replacing defective servers non-disruptively with Boot from SAN capability

    • Replicating VMs on multiple host servers using VMware Fault Tolerance

    • Ensuring application availability and minimizing service disruption with VMware High Availability

    Finally, SANs inherently allow elastic capacity addition as dictated by storage demand, thus delivering the scalability, availability and resource-sharing attributes that are core tenets of data center virtualization.

    One of the key aspects of storage for large enterprise, and more recently cloud-based, data centers is the pervasive deployment of FC Host Bus Adapters (HBAs) and FC SANs connected to back-end FC storage arrays. FC remains the dominant block storage architecture for large enterprise data centers, while iSCSI is popular in smaller deployments for the small and medium-sized business segment.

    A topological representation of a block storage SAN (FC in this example) is shown in Figure 2.

    Figure 2. Block storage SAN topology (FC example).

    Effective shared storage usage is a key to optimized performance in a virtualized data center. This paper provides an overview of technologies, with emphasis on shared storage, that help ensure an effective implementation of VMware vSphere 5.0/5.1 storage with various Emulex adapters.


    Key Storage Innovations in vSphere 5.0/5.1

    vSphere Storage Distributed Resource Scheduler (SDRS)

    vSphere 5 expanded Distributed Resource Scheduler (DRS) functionality to include storage resources. Fundamentally, in addition to the CPU and memory resources analyzed by the previously available DRS, Storage DRS takes into account storage space and I/O capacity, both for initial VM placement and for making ongoing balancing recommendations.

    A collection of datastores is pooled together to form a Datastore Cluster, which becomes the basis of Storage DRS. The aggregate storage resources of the cluster, like compute resources, are analyzed and utilized for intelligent VM placement as well as for balancing existing virtualized workloads. Recommendations for ongoing load balancing are based on user-defined or default thresholds for both I/O latency and space utilization.

    vSphere 5.1 enables more granular latency measurement for I/O load balancing called VMobservedLatency. This is achieved by measuring the I/O request-response time between a VM and the datastore. In vSphere 5.0, latency was measured as the I/O request-response time between the host and the datastore.
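    The placement logic described above can be pictured with a small model. The sketch below is purely illustrative and is not VMware's actual algorithm: the datastore names, the threshold defaults and the tie-breaking rule (least-utilized datastore wins) are all assumptions made for the example.

```python
# Toy model of Storage DRS initial placement (illustrative only, not the
# real VMware implementation). Datastores in a cluster are filtered by
# space-utilization and I/O-latency thresholds, then the least-utilized
# qualifying datastore is chosen for the new VM.

from dataclasses import dataclass

@dataclass
class Datastore:
    name: str
    capacity_gb: float
    used_gb: float
    latency_ms: float  # observed I/O latency

    @property
    def utilization(self) -> float:
        return self.used_gb / self.capacity_gb

def place_vm(cluster, vm_size_gb, space_threshold=0.80, latency_threshold_ms=15.0):
    """Pick a datastore whose post-placement space utilization and current
    latency both stay under the (default or user-defined) thresholds."""
    candidates = [
        ds for ds in cluster
        if (ds.used_gb + vm_size_gb) / ds.capacity_gb <= space_threshold
        and ds.latency_ms <= latency_threshold_ms
    ]
    if not candidates:
        return None  # no compliant datastore in the cluster
    return min(candidates, key=lambda ds: ds.utilization)

cluster = [
    Datastore("ds1", 1000, 850, 5.0),   # too full after placement
    Datastore("ds2", 1000, 400, 30.0),  # latency over threshold
    Datastore("ds3", 1000, 500, 8.0),   # compliant
]
print(place_vm(cluster, 100).name)  # -> ds3
```

    Real Storage DRS also weighs historical I/O statistics and affinity rules; the point here is only the interaction of the two thresholds during placement.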

    Storage I/O Control (SIOC)

    Rules and policies for storage quality of service (QoS), i.e., I/O prioritization, can be configured for each VM running in an ESXi server cluster connected to shared FC storage. This feature controls the amount of storage I/O allocated to VMs during periods of I/O congestion, ensuring that more important VMs get preference over less important VMs for I/O resource allocation. When I/O congestion is detected, in terms of the observed latency between the host and its datastore exceeding a threshold, I/O resources are dynamically reallocated to VMs based on user-defined QoS priorities.

    vSphere 5.1 improves SIOC by automatically computing the best latency threshold for a datastore instead of using a default or user-selected value. The threshold is determined by modeling the datastore and selecting the latency at which 90% of its peak throughput is achieved.
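    The congestion-driven reallocation can be sketched as follows. This is an illustrative model, not VMware's implementation: the VM names, share values and 30 ms threshold are invented for the example, and real SIOC operates on device queue depths rather than a simple IOPS split.

```python
# Illustrative sketch of SIOC-style prioritization: while observed latency
# stays under the congestion threshold, no throttling is applied; once it
# is exceeded, each VM receives device IOPS in proportion to its shares.

def allocate_io(vms, device_iops, observed_latency_ms, threshold_ms=30.0):
    """vms: dict mapping VM name -> configured I/O shares.
    Returns per-VM IOPS caps, or None values when unthrottled."""
    if observed_latency_ms <= threshold_ms:
        return {name: None for name in vms}  # no congestion, no caps
    total_shares = sum(vms.values())
    return {name: device_iops * shares / total_shares
            for name, shares in vms.items()}

vms = {"sql-prod": 2000, "web": 1000, "test": 500}
# Under 45 ms observed latency, the 70,000 IOPS device is divided
# 40,000 / 20,000 / 10,000 according to the 2000:1000:500 shares:
print(allocate_io(vms, device_iops=70000, observed_latency_ms=45))
```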

    Advanced I/O Device Management

    vSphere 5.1 introduces new commands for troubleshooting I/O adapters and storage fabrics. These enable diagnosis and querying of FC and Fibre Channel over Ethernet (FCoE) adapters, providing statistical information that allows the administrator to identify issues along the entire storage chain, from the HBA through the ESXi host and fabric to the storage port.

    Storage vMotion

    Storage DRS, discussed above, optimally distributes I/O loads through non-disruptive migration of running VM disk files between datastores, a process known as Storage vMotion. Additional use cases for Storage vMotion include transitioning to new arrays and migrating VMs to larger capacity or better performing LUNs. FC zoning and LUN masking must be configured to ensure the VM and host server have access to the datastore after the migration is completed.

    vSphere 5 improved Storage vMotion in two ways:

    • VMs with active snapshots can be migrated, a feature unavailable in vSphere 4. This allows co-existence with other VMware products such as VMware Host Replication.

    • A new Mirrored Mode writes I/O to both the source and destination disks simultaneously. Any writes that occur during the Storage vMotion process are committed to the source and destination at the same time, with acknowledgements required from both disks to ensure they remain synchronized.
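    The Mirrored Mode behavior can be reduced to a minimal sketch. This is a toy model, not VMware's code: the two disks are plain dictionaries, and the "acknowledgement" is just a synchronous equality check standing in for the real completion protocol.

```python
# Toy sketch of Mirrored Mode writes during Storage vMotion: every write
# issued while the migration runs is applied to both the source and the
# destination, and the I/O completes only when both copies agree, which
# keeps the two disks synchronized throughout the migration.

def mirrored_write(source, destination, block, data):
    """Apply one guest write to both disks; return True once both 'ack'."""
    source[block] = data
    destination[block] = data
    # Stand-in for waiting on both acknowledgements before completing:
    return source[block] == destination[block]

src, dst = {}, {}
mirrored_write(src, dst, 42, b"payload")
print(src == dst)  # -> True
```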


    vSphere Storage APIs for Array Integration (VAAI)

    Originally introduced in vSphere 4 to offload storage tasks to API-compliant storage arrays, VAAI gains expanded functionality in vSphere 5, with thin provisioning as a key new feature.

    Thin provisioning allows virtual disks to be created with disk space provisioned for current and future requirements, while initially committing actual disk space only as needed to store data. By eliminating space that is allocated but never used, thin provisioning improves disk usage efficiency.

    vSphere 5 VAAI thin provisioning enables reporting and reclamation of storage dead space resulting from events such as VM deletion or VM movement via Storage vMotion. This allows reuse of the reclaimed disk space.

    With vSphere 5, alarms can be surfaced in VMware vCenter Server when utilization thresholds are exceeded.
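    A compact model makes the allocate-on-write and dead-space-reclamation ideas concrete. This is an illustrative sketch, not the VAAI protocol itself: block-level bookkeeping is reduced to Python sets, and the reclaim step merely mimics what a VAAI space-reclamation primitive reports back to the array.

```python
# Toy model of a thin-provisioned virtual disk: the disk advertises its
# full provisioned size, but backing blocks are committed only on first
# write. Blocks freed by deletions become "dead space" that can be
# reported and reclaimed, as VAAI thin provisioning enables.

class ThinDisk:
    def __init__(self, provisioned_blocks):
        self.provisioned = provisioned_blocks
        self.allocated = set()   # blocks backed by real storage
        self.dead = set()        # allocated but no longer referenced

    def write(self, block):
        self.allocated.add(block)
        self.dead.discard(block)

    def delete(self, block):
        if block in self.allocated:
            self.dead.add(block)

    def reclaim(self):
        """Return dead blocks to the array; mimics space reclamation."""
        freed = len(self.dead)
        self.allocated -= self.dead
        self.dead.clear()
        return freed

disk = ThinDisk(provisioned_blocks=1000)
for b in range(10):          # guest writes commit only 10 of 1000 blocks
    disk.write(b)
disk.delete(3)
disk.delete(7)               # two blocks become dead space
print(len(disk.allocated), disk.reclaim(), len(disk.allocated))  # -> 10 2 8
```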

    Profile-driven storage

    The vSphere 5 vStorage APIs for Storage Awareness enable intelligent and rapid placement of VMs based on attributes such as performance or availability. The storage array's capabilities are requested in a VM's storage profile. Only datastores or datastore clusters compliant with this profile are utilized during initial VM placement, VM cloning and Storage vMotion.

    Benefits of profile-driven storage include more efficient, less manual storage planning and provisioning, as well as fewer errors in matching a VM workload's service level agreement to a datastore's performance attributes.
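    The compliance check at the heart of profile-driven placement amounts to a capability-subset test, as the hedged sketch below shows. The datastore names and capability labels ("replicated", "ssd", and so on) are invented for illustration and do not correspond to actual vStorage API capability strings.

```python
# Illustrative sketch of profile-driven storage: a VM storage profile
# lists required capabilities, and only datastores advertising every one
# of them are compliant candidates for placement, cloning or migration.

def compliant(datastores, profile):
    """Return names of datastores whose capabilities cover the profile."""
    return [name for name, caps in datastores.items()
            if profile <= caps]  # subset test: all required caps present

datastores = {
    "gold-ds":   {"replicated", "ssd", "raid10"},
    "silver-ds": {"replicated", "raid5"},
}
profile = {"replicated", "ssd"}   # the VM's storage profile
print(compliant(datastores, profile))  # -> ['gold-ds']
```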

    Efficient storage

    vSphere 5.1 introduces Flexible Space Efficiency (Flex-SE), a disk format designed to strike the right balance between space efficiency and I/O throughput. This balance can be managed throughout the life cycle of a VM, from storage allocation (controlling the allocation block size) to how blocks are managed after they are allocated (deleted blocks can be reclaimed). This feature lets the user determine the right level of storage efficiency for a deployment; for example, Flex-SE can be used to optimize storage efficiency for virtual desktop infrastructure (VDI).

    NPIV/vPort mapping

    While N_Port ID Virtualization (NPIV) is not a new capability in vSphere 5, it merits a brief discussion because of the value it can deliver to vSphere FC SAN deployments. NPIV is an ANSI T11 standard describing how a single FC HBA port, with a single physical N_Port worldwide port name (WWPN), can register with the fabric using several virtual (logical) N_Port WWPNs.

    The use of multiple addresses through a single physical HBA port is very valuable in vSphere hosts as it enables zoning and LUN masking, giving each VM specialized access to only its required storage resources.

    NPIV requires a storage configuration called Raw Device Mapping (RDM), which gives VMs direct access to storage LUNs.
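    The relationship between the physical port and its virtual identities can be modeled in a few lines. This sketch is conceptual only: the WWPN values are made up, and real vPort registration involves fabric login (FDISC), not a dictionary insert.

```python
# Illustrative model of NPIV: one physical N_Port registers several
# virtual WWPNs with the fabric, so zoning and LUN masking can be applied
# per VM rather than per physical HBA port.

class PhysicalPort:
    def __init__(self, wwpn):
        self.wwpn = wwpn       # the single physical N_Port WWPN
        self.vports = {}       # VM name -> virtual (logical) WWPN

    def register_vport(self, vm_name, virtual_wwpn):
        """Stand-in for the fabric login that registers a virtual WWPN."""
        self.vports[vm_name] = virtual_wwpn

hba = PhysicalPort("10:00:00:00:c9:12:34:56")        # example value
hba.register_vport("vm-sql", "28:00:00:00:c9:00:00:01")
hba.register_vport("vm-web", "28:00:00:00:c9:00:00:02")

# Zoning and LUN masking can now key on each VM's own WWPN:
print(hba.vports["vm-sql"])  # -> 28:00:00:00:c9:00:00:01
```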

    Key Emulex Platforms for vSphere 5.0/5.1

    High-performance Emulex LightPulse 16GFC HBAs

    The launch of vSphere 5.1 intersects the recent 2011/2012 launch of 16GFC, the latest generation of FC technology. The scale-up enhancements of vSphere 5.1, along with its improvements in agility and availability, coupled with the processor and storage hardware inflection points, collectively provide ample motivation for deploying 16GFC.

    Emulex is a leader in data center-class storage connectivity, and the LPe16000 16GFC family is the ninth generation of LightPulse FC HBAs, enabling server-to-storage connectivity at double the previous 8GFC line rate. 16GFC delivers up to 3200 MB/s bi-directional throughput (double 8GFC's 1600 MB/s) and over 1 million IOPS per HBA port, supporting deployments of densely virtualized servers, increased scalability, and matching the capabilities of multi-core processors and SSD-based storage infrastructure.
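    Some back-of-the-envelope arithmetic relates the throughput and IOPS figures quoted above. The 512-byte block size used below is an assumption chosen for illustration; small-block benchmarks are typical of IOPS-oriented tests.

```python
# Illustrative arithmetic linking IOPS, block size and link throughput
# for the 16GFC figures quoted in the text (1600 MB/s per direction,
# 3200 MB/s bi-directional, over 1 million IOPS per port).

def throughput_mb_s(iops, block_size_bytes):
    """Sequential throughput implied by an IOPS rate at a block size."""
    return iops * block_size_bytes / 1_000_000

# 1,000,000 IOPS at an assumed 512-byte block size consumes 512 MB/s,
# comfortably within one 16GFC port's 1600 MB/s per-direction rate:
print(throughput_mb_s(1_000_000, 512))  # -> 512.0

# Doubling 8GFC's 1600 MB/s bi-directional rate gives 16GFC's 3200 MB/s:
print(2 * 1600)  # -> 3200
```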

    Emulex 16GFC adapters are qualified for vSphere 5.1, and certified as VMware Ready I/O devices.

    Note: As of publication, Emulex is the only identified provider of in-box FC drivers for 16GFC.


    Emulex's OneCommand Manager Plug-in for VMware vCenter Server

    The OneCommand Manager plug-in for VMware vCenter Server is a native software plug-in that integrates real-time lifecycle management of Emulex adapters into the VMware vCenter console, as shown in Figure 3. Also available is Emulex's vSphere iSCSI IMA plug-in for VMware vCenter Server integration. This tight integration centralizes and simplifies virtualization management.

    Figure 3. The OneCommand Manager plug-in integrated into the VMware vCenter console.

    OneCommand Manager plug-in for VMware vCenter Server builds on Emulex Common Information Model (CIM) providers and established OneCommand Manager features to proactively address key data center issues and improve operational efficiency across VMware hosts and clusters.


    vSphere 5 adds a wealth of features that increase VM performance scalability while simplifying shared storage provisioning based on I/O capacity and disk space utilization with Storage DRS. Underscoring the importance of I/O, new I/O device management capability is now available. Emulex delivers multiple storage I/O hardware platforms to meet the varied requirements of vSphere infrastructure, and Emulex's OneCommand software, with a vCenter CIM provider plug-in, is a common management interface that integrates seamlessly with VMware's vCenter Server management platform.

    World Headquarters 3333 Susan Street, Costa Mesa, California 92626 +1 714 662 5600
    Bangalore, India +91 80 40156789 | Beijing, China +86 10 68499547
    Dublin, Ireland +353 (0)1 652 1700 | Munich, Germany +49 (0) 89 97007 177
    Paris, France +33 (0) 158 580 022 | Tokyo, Japan +81 3 5325 3261
    Wokingham, United Kingdom +44 (0) 118 977 2929


    13-0208 8/12