
VMware vSphere best practices for IBM SAN Volume Controller and IBM Storwize family

Rawley Burbridge
Jeremy Canady

IBM Systems and Technology Group ISV Enablement

September 2013

© Copyright IBM Corporation, 2013


Table of Contents

Abstract
Introduction
    Guidance and assumptions
Introduction to VMware vSphere
    Infrastructure services
    Application services
    VMware vCenter Server
VMware storage-centric features
    VMFS
    Storage vMotion and Storage Dynamic Resource Scheduler (DRS)
    Storage I/O Control
Storage and connectivity best practices
    Overview of VMware Pluggable Storage Architecture
        Storage Array Type Plug-in
        Path Selection Plug-in
    VMware ESXi host PSA best practices
        Fixed PSP – vSphere 4.0 through vSphere 5.1 default behavior
        Round Robin PSP – Recommendation – vSphere 5.5 default behavior
        Tuning the Round Robin I/O operation limit
    VMware ESXi host Fibre Channel and iSCSI connectivity best practices
        Fibre Channel connectivity
        iSCSI connectivity
    General storage best practices for VMware
        Physical storage sizing best practices
        Volume and datastore sizing
        Thin provisioning with VMware
        Using Easy Tier with VMware
        Using IBM Real-Time Compression with VMware
    VMware storage integrations
        VMware vSphere Storage APIs for Array Integration
        IBM Storage Management Console for VMware vCenter
        VMware vSphere APIs for Data Protection
Summary
Resources
Trademarks and special notices


Abstract

The purpose of this paper is to provide insight into the value proposition of the IBM System Storage SAN Volume Controller (SVC) and IBM Storwize family for VMware environments, and to provide best-practice configurations.

Introduction

The many benefits that server virtualization provides have led to its explosive adoption in today's data center. Server virtualization with VMware vSphere has been successful in helping customers use hardware more efficiently, increase application agility and availability, and decrease management and other costs.

The IBM® System Storage® SAN Volume Controller is an enterprise-class storage virtualization system

that enables a single point of control for aggregated storage resources. SAN Volume Controller consolidates the capacity from different storage systems, both IBM and non-IBM branded, while enabling common copy functions and non-disruptive data movement, and improving performance and availability.

The IBM Storwize® disk family has inherited the SAN Volume Controller software base, and as such offers many of the same features and functions.

The common software base of the SAN Volume Controller and IBM Storwize family allows IBM to

conveniently offer the same VMware support, integrations, and plug-ins. This white paper focuses on and provides best practices for the following key components:

IBM System Storage SAN Volume Controller

IBM System Storage SAN Volume Controller combines hardware and software into an integrated, modular solution that forms a highly-scalable cluster. SVC allows customers to manage all of the storage in their IT infrastructure from a single point of control and also increase the utilization,

flexibility, and availability of storage resources. For additional information about IBM SAN Volume Controller, refer to the following URL: ibm.com/systems/storage/software/virtualization/svc/index.html

IBM Storwize V7000 and Storwize V7000 Unified systems

The IBM Storwize V7000 system provides block storage enhanced with enterprise-class features to midrange customer environments. With the built-in storage virtualization, replication capabilities,

and key VMware storage integrations, the Storwize V7000 system is a great fit for VMware deployments. The inclusion of IBM Real-time Compression™ further enhances the strong feature set of this product.

The IBM Storwize V7000 Unified system builds upon the Storwize V7000 block storage capabilities by also providing support for file workloads and file specific features such as IBM Active Cloud Engine™. For additional information about the IBM Storwize V7000 system, refer to

the following URL: ibm.com/systems/storage/disk/storwize_v7000/index.html

IBM Flex System V7000 Storage Node

IBM Flex System™ V7000 Storage Node is a high-performance block-storage solution that has been designed to integrate directly with IBM Flex System. IBM Flex System V7000 Storage Node


provides advanced storage capabilities, such as IBM System Storage Easy Tier®, IBM Real-Time Compression, thin provisioning, and more. For more information regarding the advanced

capabilities of IBM Flex System V7000 Storage Node, refer to: ibm.com/systems/flex/storage/v7000/index.html.

IBM Storwize V5000

The IBM Storwize V5000 system provides cost efficient midrange storage. Built from the same technology as the IBM SAN Volume Controller and IBM Storwize V7000, the IBM Storwize V5000 offers advanced storage features in a cost-efficient solution. For more information, refer to:

ibm.com/systems/hk/storage/disk/storwize_v5000/index.html

IBM Storwize V3700

The IBM Storwize V3700 system is an entry-level storage system designed for ease of use and

affordability. Built from the same technology used in all of the Storwize family, the Storwize V3700 system offers some of the advanced features that can be found in other Storwize models. For more details, refer to: ibm.com/systems/storage/disk/storwize_v3700/index.html

VMware vSphere 5.5

VMware vSphere 5.5 (at the time of this publication) is the latest version of a market-leading virtualization platform. vSphere 5.5 provides server virtualization capabilities and rich resource

management. For additional information about VMware vSphere 5.5, refer to the following URL:

www.vmware.com/products/vsphere/mid-size-and-enterprise-business/overview.html

Guidance and assumptions

The intent of this paper is to provide architectural, deployment, and management guidelines for customers who are planning or have already decided to implement VMware on the IBM SVC or Storwize family. It provides a brief overview of the VMware technology concepts, key architecture considerations, and deployment guidelines for implementing VMware.

This paper does not provide detailed performance numbers or advanced high availability and disaster recovery techniques, and it is not intended to serve as any type of formal certification. For detailed information regarding hardware capability and supported configurations, refer to the VMware hardware compatibility list and IBM System Storage Interoperation Center (SSIC) websites.

VMware hardware compatibility list URL: http://www.vmware.com/resources/compatibility/search.php

IBM SSIC URL: ibm.com/systems/support/storage/ssic/interoperability.wss

This paper assumes that readers have essential knowledge in the following areas:

- VMware vCenter Server
- ESXi installation
- Virtual Machine File System (VMFS) and raw device mapping (RDM)
- VMware Storage vMotion, High Availability (HA), and Distributed Resource Scheduler (DRS)


Introduction to VMware vSphere

VMware vSphere is a virtualization platform capable of transforming a traditional data center and industry-standard hardware into a shared, mainframe-like environment. Hardware resources can be pooled together to run varying workloads and applications with different service-level needs and performance requirements. VMware vSphere is the enabling technology to build a private or public cloud infrastructure.

The components of VMware vSphere fall into three categories: Infrastructure services, application services, and VMware vCenter Server. Figure 1 shows a representation of the VMware vSphere platform.

Figure 1: VMware vSphere platform

Infrastructure services

Infrastructure services perform the virtualization of server hardware, storage, and network resources. The

services within the infrastructure services category are the foundation of the VMware vSphere platform.

Application services

The components categorized as application services address availability, security, and scalability concerns for all applications running on the vSphere platform, regardless of the complexity of the application.


VMware vCenter Server

VMware vCenter Server provides the foundation for the management of the vSphere platform. It provides centralized management of configurations and aggregated performance statistics for clusters, hosts, virtual machines, storage, and guest operating systems. VMware vCenter Server scales to manage large enterprises, granting administrators the ability to manage more than 1,000 hosts and up to 10,000 virtual machines from a single console.

VMware vCenter Server is also an extensible management platform. The open plug-in architecture allows VMware and its partners to directly integrate with vCenter Server, extending the capabilities of the vCenter platform and adding functionality.

Figure 2 shows the main pillars of the functionality provided by VMware vCenter Server.

Figure 2: Pillars of VMware vCenter Server

VMware storage-centric features

Since its inception, VMware has pushed for advancements in storage usage for virtualized environments. VMware uses a purpose-built, virtualization-friendly clustered file system that is enhanced with storage-centric features and functionality aimed at easing the management and maximizing the performance of the storage infrastructure used by virtualized environments. VMware has also led the industry in working with partners to create integrations between storage systems and VMware. The following sections outline some of the storage features provided by VMware.


VMFS

VMware VMFS is a purpose-built file system for storing virtual machine files on Fibre Channel (FC) and iSCSI-attached storage. It is a clustered file system, meaning that multiple vSphere hosts can read and write to the same storage location concurrently, and hosts can be added to or removed from a VMFS volume without impact. The file system uses on-disk file locking to ensure that multiple vSphere hosts do not access the same file at the same time. For example, this ensures that a virtual machine is powered on by only one vSphere host.

VMware VMFS has undergone many changes and enhancements since the inception of VMFS version 1 with ESX Server version 1. The latest version of VMFS (at the time of this publication), VMFS-5, includes many enhancements that increase the scalability and performance of VMFS. Table 1 compares VMFS-5 with the previous version, VMFS-3.

Feature | VMFS-3 | VMFS-5
64 Terabyte VMFS volumes | Yes (requires 32 extents) | Yes (single extent)
Support for more files | 30,720 | 130,690
Support for 64 TB physical raw device mappings | No | Yes
Unified block size (1 MB) | No | Yes
Atomic test and set (ATS) usage (VMware vStorage API for Array Integration (VAAI) locking mechanism) | Limited | Unlimited
Sub-blocks for space efficiency | 64 KB (maximum approximately 3 k) | 8 KB (maximum approximately 30 k)
Small file support | No | 1 KB

Table 1: Comparing VMware VMFS-3 with VMFS-5

VMware provides a nondisruptive upgrade path between the various versions of VMFS.
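As a quick check before or after such an upgrade, the VMFS version of a datastore can be confirmed from the ESXi Shell, and a VMFS-3 datastore can be upgraded in place with ESXCLI on ESXi 5.x. This is only a minimal sketch; the label datastore1 is a placeholder for your own datastore name:

# vmkfstools -Ph /vmfs/volumes/datastore1

# esxcli storage vmfs upgrade --volume-label=datastore1

The vmkfstools output reports the VMFS version and block size, and the esxcli upgrade command performs the nondisruptive VMFS-3 to VMFS-5 upgrade while virtual machines remain running.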

Storage vMotion and Storage Dynamic Resource Scheduler (DRS)

VMware Storage vMotion is a feature that was added with experimental support in VMware ESX 3.5 and made its official debut in vSphere 4.0. Storage vMotion provides the capability to migrate a running virtual machine between two VMFS volumes without any service interruption. VMware administrators have long had the vMotion capability, which migrates a running virtual machine between two vSphere hosts. Storage vMotion introduces the same functionality and use cases for migrating virtual machines between storage systems.


VMware has built upon the Storage vMotion functionality with a new feature first introduced in vSphere 5.0, Storage DRS. Storage DRS creates the following use cases for using Storage vMotion:

Initial virtual machine placement – When creating a virtual machine, users can now select a VMFS datastore cluster object rather than an individual VMFS datastore for placement. Storage DRS can choose the appropriate VMFS datastore to place the virtual machine based on space

utilization and I/O load. Figure 3 provides an example of initial placement.

Figure 3: VMware Storage DRS initial placement

Load balancing – Storage DRS continuously monitors VMFS datastore space usage and latency.

Configurable thresholds can trigger Storage DRS to issue migration recommendations when response time or space utilization thresholds have been exceeded. Storage vMotion is used to migrate virtual machines to bring the VMFS datastores back into balance. Figure 4 provides an

example of Storage DRS load balancing.

Figure 4: VMware Storage DRS load balancing

VMFS datastore maintenance mode – The last use case of Storage DRS is a way to automate

the evacuation of virtual machines from a VMFS datastore that needs to undergo maintenance. Previously, each virtual machine would need to be migrated manually from the datastore. Datastore maintenance mode allows the administrator to issue a command to place the datastore in

maintenance mode, and Storage DRS migrates all the virtual machines from the datastore.


Storage I/O Control

The Storage I/O Control feature addresses the noisy neighbor problem that can exist when many workloads (virtual machines) access the same resource (a VMFS datastore). Storage I/O Control allows

administrators to set share ratings on virtual machines to ensure that virtual machines are getting the required amount of I/O performance from the storage. The share rating works across all vSphere hosts that are accessing a VMFS datastore. Virtual machine disk access is manipulated by controlling the host

queue slots. Figure 5 shows two examples of virtual machines accessing a VMFS datastore. Without Storage I/O Control enforcing share priority, a non-production data mining virtual machine can monopolize disk resources, impacting the production virtual machines.

Figure 5: VMware Storage I/O Control

Storage and connectivity best practices

IBM System Storage SVC and the Storwize family use the same software base and host connectivity options. This common code base allows IBM to provide consistent functionality and management across multiple products. It also means that, from a VMware ESXi host perspective, each of the storage products that share this code base appears as the same storage type and has consistent best practices. The following sections outline the best practices for VMware with SVC and the Storwize family.

Overview of VMware Pluggable Storage Architecture

VMware vSphere 4.0 introduced a new storage architecture called the Pluggable Storage Architecture (PSA). Its purpose is to enable the use of third-party storage vendor multipathing capabilities through a modular

architecture that allows partners to write a plug-in for their specific array capabilities. These modules can communicate with the intelligence running in the storage system to determine the best path selection or to coordinate proper failover behavior. Figure 6 provides a diagram of the VMware PSA; the modules and

their purpose are highlighted in the following sections.


Figure 6: VMware Pluggable Storage Architecture diagram

Storage Array Type Plug-in

The Storage Array Type Plug-in (SATP) is a module that can be written by storage partners for their specific storage systems. A SATP module provides storage system intelligence to the VMware ESXi hypervisor (VMkernel), including characteristics of the storage system and any specific operations required to detect path state and initiate failovers.

IBM SAN Volume Controller and the Storwize family use the SATP called VMW_SATP_ALUA with vSphere 5.5, and VMW_SATP_SVC with vSphere 4.0, 5.0, and 5.1. When a volume is provisioned from SVC or the Storwize family and mapped to a vSphere 4.0 or newer ESXi host, the volume is automatically assigned the correct SATP. The SATP configured on a volume can be viewed in the Manage Paths window, as displayed in Figure 7 and Figure 8.

Figure 7: Properties of Storwize V7000 assigned volume – SATP example


Figure 8: Properties of Storwize V7000 assigned volume using web client - SATP example

In addition to providing storage system intelligence, the SATP also contains the default setting for which Path Selection Plug-in (PSP) is used by the ESXi host for each storage volume.

Path Selection Plug-in

The PSP is the multipath policy used by the VMware ESXi host to access the storage volume. The VMware Native Multipathing Plug-in (NMP) offers three different PSPs, which storage partners can

choose to use based on the characteristics of the storage system. The three PSPs are:

Most Recently Used – When the Most Recently Used (MRU) policy is used, the ESXi host selects and begins using the first working path discovered on boot, or when a new volume is

mapped to the host. If the active path fails, the ESXi host switches to an alternative path and continues to use it regardless of whether the original path is restored. VMware uses the name VMW_PSP_MRU for the MRU policy.

Fixed – When the Fixed policy is used, the ESXi host selects and begins using the first working path discovered on boot or when a new volume is mapped to the host, and also marks the path as preferred. If the active preferred path fails, the ESXi host switches to an alternative path. The ESXi

host automatically reverts back to the preferred path when it is restored. VMware uses the name VMW_PSP_FIXED for the Fixed policy.

Round Robin – When the Round Robin policy is used, the ESXi host selects and begins using

the first working path discovered on boot or when a new volume is mapped to the host. By default, 1,000 I/O requests are sent down the path before the next working path is selected. The ESXi host


continues this cycle through all available Active (I/O) paths. Failed paths are excluded from the selection until restored. VMware uses the name VMW_PSP_RR for the Round Robin policy.

The PSP configured on a volume can be viewed in the Manage Paths window, as displayed in Figure 9 and Figure 10.

Figure 9: Properties of Storwize V7000 assigned volume – PSP example

Figure 10: Properties of Storwize V7000 assigned volume using the web client - PSP example


VMware ESXi host PSA best practices

The IBM SAN Volume Controller and Storwize family products all support any of the three offered PSPs (VMW_PSP_MRU, VMW_PSP_FIXED, and VMW_PSP_RR). However, IBM has recommended best practices regarding which PSP customers should use in their environments.

Fixed PSP – vSphere 4.0 through vSphere 5.1 default behavior

Volumes provisioned from IBM SAN Volume Controller and the Storwize family to a vSphere 4.0

through vSphere 5.1 ESXi host are assigned the SATP of VMW_SATP_SVC, which uses a PSP default of VMW_PSP_FIXED. As previously mentioned, the Fixed PSP selects the first discovered working path as the preferred path. Customers need to be aware that if the Fixed PSP is used, the

preferred paths used by the volumes must be evenly distributed across the available paths. This ensures that the active paths are evenly balanced. The preferred path can be modified in the Manage Paths window.

Right-click the path that you need to set as the new preferred path, and then click Preferred, as

shown in Figure 11. This is a nondisruptive change to the active VMFS datastores.

Figure 11: Modifying the preferred path

Managing the preferred paths across hosts and VMFS datastores can become an unnecessary

burden to administrators, and this is why IBM recommends modifying the default behavior of vSphere 4.0 through vSphere 5.1.

In some configurations, it might be beneficial to use fixed paths. vSphere 5.5 ESXi hosts default to the recommended Round Robin PSP, but this can be changed using either the web client or the .NET client. At the time of this publication, the web client cannot set preferred paths. For this reason, it is recommended to use the .NET client to change the pathing configuration and define preferred paths.

Round Robin PSP – Recommendation – vSphere 5.5 default behavior

Using Round Robin PSP ensures that all paths are equally used by all the volumes provisioned from IBM SAN Volume Controller or the Storwize family. Volumes provisioned from the IBM SAN Volume


Controller or the Storwize family to a vSphere 5.5 ESXi host are assigned the SATP of VMW_SATP_ALUA, which uses the PSP default of VMW_PSP_RR. vSphere 4.0 through vSphere

5.1 ESXi hosts do not default to VMW_PSP_RR but the default behavior can be changed in the following ways:

Modify volume-by-volume – This method can be performed on active VMFS datastores

nondisruptively. However, it does not affect new volumes that are subsequently assigned to the ESXi host. The PSP can be modified from the Manage Paths window, or from the command line as shown in the sketch after this list.

From the Path Selection list, select Round Robin (VMware) and click Change, as shown in

Figure 12.

Figure 12: Modifying the PSP for an individual volume

IBM Storage Management Console for VMware vCenter – The IBM Storage

Management Console for VMware vCenter version 3.2.0 includes the ability to set

multipath policy enforcement. This setting can enforce the Round Robin policy on all new volumes provisioned through the management console. This selection option is displayed in Figure 13.


Figure 13: Multipath policy enforcement with management console

Modify all volumes – This method modifies the default PSP to be Round Robin. All newly discovered volumes use the new default behavior; however, existing volumes are not modified until the ESXi host rediscovers them, as is done during a reboot. You can modify the default behavior using the following vSphere CLI commands:

ESX/ESXi 4.x: esxcli nmp satp setdefaultpsp --psp VMW_PSP_RR --satp VMW_SATP_SVC

ESXi 5.0/5.1: esxcli storage nmp satp set --default-psp VMW_PSP_RR --satp VMW_SATP_SVC

ESXi 5.5: No action is required.

This method is recommended by IBM for simplicity and global enforcement.
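For administrators who prefer the command line over the Manage Paths window, the same changes can be made with ESXCLI on ESXi 5.x. This is a sketch only; the naa identifier below is the example device ID used elsewhere in this paper and should be replaced with your own volume identifiers:

# esxcli storage nmp device set --device naa.6005076000810006a800000000000003 --psp VMW_PSP_RR

# esxcli storage nmp device list --device naa.6005076000810006a800000000000003

# esxcli storage nmp satp list

The first command changes the PSP for a single existing volume (the volume-by-volume method), the second confirms the PSP now applied to that device, and the third lists each SATP together with its current default PSP so that the global change described above can be verified.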

Tuning the Round Robin I/O operation limit

As the name implies, the Round Robin PSP uses a round-robin algorithm to balance the load across all active storage paths. A path is selected and then used until a specific quantity of data has been

delivered or received. After that quantity has been reached, the PSP selects the next path in the list and begins using it. The quantity at which a path change is triggered is known as the limit. The Round Robin PSP supports two types of limits: IOPS and bytes.

IOPS limit - The Round Robin PSP defaults to an IOPS limit with a value of 1000. In this default case, a new path will be used after 1000 I/O operations have been issued.

Bytes limit - The bytes limit is an alternative to the IOPS limit. It allows a specified number of bytes to be transferred before the path is switched.

Adjusting the limit can have a dramatic impact on performance in specific use cases. For example, the default limit of 1,000 I/O operations sends 1,000 I/Os down a path before switching. If the load is such that a portion of those 1,000 I/Os can saturate the bandwidth of the path, the remaining I/Os must wait even if the storage system could service the requests. On average, this waiting results in a total throughput roughly equal to that of one path. This result can be seen with a large number of large I/Os or an extremely large number of small I/Os. The IOPS or bytes limit can be adjusted downward, allowing the path to be switched at a more frequent rate. The adjustment allows the bandwidth of additional paths to be used while another path is saturated.


Although significant in specific cases, the overall impact of the operation limit is small in the general case. The limit applies only to a single volume connected to a single host. In a vSphere environment, a

volume is generally shared by many hosts, and hosts access many volumes. The cumulative bandwidth available to the volumes quickly exceeds the physically available bandwidth of a host. For this reason, the default value of 1,000 I/O operations is recommended.

In limited cases, a single host might need to use more than a single path's worth of bandwidth to a single volume. In those cases, the IOPS or bytes limit can be lowered as needed to achieve the required path utilization.

Modifying the Round Robin I/O operation limit with ESXCLI

The Round Robin I/O operation limit can be modified using ESXCLI. The ESXCLI commands can be run from the vCLI, the ESXi Shell, or through PowerCLI. The following examples show how to set the IOPS limit to a value of 3 and the bytes limit to 96 KB (98,304 bytes):

# esxcli storage nmp psp roundrobin deviceconfig set --type "iops" --iops 3 --device=naa.6005076000810006a800000000000003

# esxcli storage nmp psp roundrobin deviceconfig set --type "bytes" --bytes 98304 --device=naa.6005076000810006a800000000000003

VMware ESXi host Fibre Channel and iSCSI connectivity best practices

IBM SAN Volume Controller and the Storwize family support the FC and iSCSI protocols for block storage connectivity with VMware ESXi. Each protocol has its own unique best practices, which are covered in the following sections.

Fibre Channel connectivity

IBM SAN Volume Controller and the Storwize family can support up to 512 hosts and 1024 distinct configured host worldwide port names (WWPNs) per I/O group. Access from a host to a SAN Volume

Controller cluster or a Storwize family system is defined by means of switch zoning. A VMware ESXi host switch zone must contain only ESXi systems. Host operating system types must not be mixed within switch zones. VMware ESXi hosts follow the same zoning best practices as other operating

system types. You can find further details on zoning in the Implementing the IBM System Storage SAN Volume Controller V6.3 IBM Redbooks® guide at the following URL: ibm.com/redbooks/redbooks/pdfs/sg247933.pdf

A maximum of eight paths from a host to SAN Volume Controller or the Storwize family is supported. However, IBM recommends that a maximum of four paths, two for each node, be used. The VMware storage maximums dictate that a maximum of 256 logical unit numbers (LUNs) and 1024 paths be used per ESXi server. Following the IBM recommendation of four paths per volume ensures that the maximum number of LUNs can still be reached without exceeding the path limit (256 LUNs x 4 paths = 1024 paths) for each ESXi host accessing SAN Volume Controller or the Storwize family.

VMware ESXi hosts must use the generic host object type for SAN Volume Controller and the Storwize family. As VMware is generally configured to allow multiple hosts to access the same


clustered volumes, two different approaches can be used for ensuring that consistency is maintained when creating host objects and volume access.

Single ESXi host per storage host object

The first approach is to place the WWPNs of each VMware ESXi host in its own storage host object. Figure 14 provides an example of this type of setup. Two VMware ESXi hosts are

configured, each with its own storage host object. These two VMware ESXi hosts share the same storage and are part of a VMware cluster.

Figure 14: Unique storage host object per ESXi host

The advantage of this approach is that the storage host definitions are very clear to create and maintain. The disadvantage, however, is that when volume mappings are created, they must be created for each storage host object. It is also important, but not required, to use the same SCSI LUN numbers across VMware ESXi hosts, which is more difficult to maintain with the single VMware ESXi host per storage host object method.

VMware ESXi cluster per storage host object

An alternative approach, when the VMware ESXi hosts are in the same cluster, is to create a single storage host object for the cluster and place all of the VMware ESXi host WWPNs in that storage host object. Figure 15 provides an example of the

previously used VMware ESXi hosts being placed in a single storage host object.


Figure 15: Single storage host object for multiple ESXi hosts

The advantage of this approach is that volume mapping is simplified because a single mapping is

performed for the VMware ESXi cluster rather than on a per-host basis. The disadvantage of this approach is that the storage host definitions are less clear. If a VMware ESXi host is being retired, the WWPNs for that host must be identified and removed from the storage host

object.

Both of the storage host object approaches are valid, and the advantages and disadvantages should be weighed by each customer. IBM recommends that a single approach is chosen and

implemented on a consistent basis.

iSCSI connectivity

VMware ESXi includes iSCSI initiator software, which can be used with 1 GbE or 10 GbE Ethernet connections. The VMware ESXi software iSCSI initiator is the only supported way to connect to SAN Volume Controller or the Storwize family using iSCSI. Each VMware ESXi host can have a single software iSCSI initiator, and that initiator provides the source iSCSI qualified name (IQN).

A single iSCSI initiator does not limit the number or type of network interface cards (NICs) that can be used for iSCSI storage access. VMware best practice recommends that, for each physical NIC that will be used, a matching VMkernel port is created and bound to that physical NIC. The VMkernel port is assigned an IP address, while the physical NIC acts as a virtual switch uplink. Figure 16 provides an example of a VMware ESXi iSCSI initiator configured with two VMkernel IP addresses and bound to two physical NICs.


Figure 16: VMware iSCSI software initiator port binding
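The port binding shown in Figure 16 can also be configured from ESXCLI on ESXi 5.x. The following lines are a sketch under assumed names: vmhba37 stands for the software iSCSI adapter, and vmk1/vmk2 stand for the VMkernel ports created for iSCSI; substitute the names used in your environment:

# esxcli iscsi software set --enabled=true

# esxcli iscsi networkportal add --adapter vmhba37 --nic vmk1

# esxcli iscsi networkportal add --adapter vmhba37 --nic vmk2

# esxcli iscsi networkportal list --adapter vmhba37

Each VMkernel port that is bound in this way must be backed by exactly one active physical uplink so that the path selection and failover behavior described in this paper applies.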

The VMware ESXi iSCSI initiator supports two types of storage discovery, which are covered in the following sections.

Static iSCSI discovery

With the static discovery method, the target iSCSI ports on the SAN Volume Controller or Storwize family system are entered manually into the iSCSI configuration. Figure 17 and Figure 18 provide examples of this configuration.


Figure 17: Static iSCSI discovery

Figure 18: Static iSCSI discovery through web client

With static discovery, each source VMkernel IP address performs an iSCSI session login to each of the static target port IPs. Each time a login occurs, an iSCSI session is registered on the node

in which the login occurred. The following example provides an overview of the sessions created with static discovery.

Source VMkernel IPs of:

1.1.1.1

1.1.1.2


SAN Volume Controller or Storwize family system target IPs of:

1.1.1.10

1.1.1.11

1.1.1.12

1.1.1.13

Sessions are created for each source and target port combination, resulting in eight total sessions, four on each node. The following sessions would be created for Node-1; similar sessions would be created for Node-2.

Vmk-0 (1.1.1.1) to Port-0 (1.1.1.10)

Vmk-0 (1.1.1.1) to Port-1 (1.1.1.11)

Vmk-1 (1.1.1.2) to Port-0 (1.1.1.10)

Vmk-1 (1.1.1.2) to Port-1 (1.1.1.11)


Figure 19: Sessions created with iSCSI discovery

This static discovery configuration results in four sessions per node.
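Static targets can also be added from ESXCLI rather than the client GUI. This is a sketch only; vmhba37 is an assumed software iSCSI adapter name, the address reuses the 1.1.1.10 example above with the default iSCSI port 3260, and the IQN shown is a placeholder for the target name reported by your SAN Volume Controller or Storwize system:

# esxcli iscsi adapter discovery statictarget add --adapter vmhba37 --address 1.1.1.10:3260 --name iqn.example.storage:node1

# esxcli iscsi adapter discovery statictarget list

A rescan of the adapter is required afterward for the new sessions to be established.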

Dynamic iSCSI discovery

Dynamic iSCSI discovery simplifies the setup of the iSCSI initiator because only one target IP address must be entered. The VMware ESXi host queries the storage system for the available target IPs, which will all be used by the iSCSI initiator. Figure 20 provides an example of the

dynamic iSCSI discovery configuration within VMware ESXi.


Figure 20: Dynamic iSCSI discovery

Figure 21: Dynamic iSCSI discovery through web client
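Dynamic discovery can likewise be configured from ESXCLI. This is a sketch under the same assumptions as the static example (vmhba37 as the software iSCSI adapter and 1.1.1.10 as one of the system's target IP addresses from the example above):

# esxcli iscsi adapter discovery sendtarget add --adapter vmhba37 --address 1.1.1.10:3260

# esxcli iscsi session list --adapter vmhba37

After a rescan, the session list output can be compared against the per-node session limits described in the following paragraphs.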

When using iSCSI connectivity for VMware ESXi along with SAN Volume Controller and the Storwize family systems, the important thing to note is how many iSCSI sessions are created on each node.

For storage systems running software older than 6.3.0.1, only one session can be created on each node. This means that the following rules must be followed:

- Maximum of one VMware iSCSI initiator session per node.
- Static discovery only; dynamic discovery is not supported.
- The VMware ESXi host iSCSI initiator can have only one VMkernel IP address associated with one physical NIC.


For storage systems running software version 6.3.0.1 or later, these rules are changed as provided in the following list:

- Maximum of four VMware iSCSI initiator sessions per node.
- Both static and dynamic discovery are supported.
- The VMware ESXi host iSCSI initiator can have up to two VMkernel IP addresses associated with two physical NICs.

The configuration presented in Figure 19 provides redundancy for the VMware ESXi host because two physical NICs are actively used. It also uses all available ports of both nodes of the storage

system. Regardless of whether static or dynamic discovery is used, this configuration stays within the guidelines of four sessions per node.

General storage best practices for VMware

Storage for virtualized workloads requires considerations that might not be applicable to other workload types. Virtualized workloads often contain a heterogeneous mixture of applications, sometimes with disparate storage requirements. The following sections outline VMware storage sizing best practices on

SAN Volume Controller and the Storwize family, and also detail how to use thin provisioning, IBM System Storage Easy Tier, and compression with VMware virtualized workloads.

Physical storage sizing best practices

There are two basic attributes to consider when sizing storage: storage capacity and storage performance. Storage performance does not scale with drive size, meaning that larger drives generally provide more capacity with less performance, while smaller drives generally provide less capacity with more

performance. For example, to meet a storage capacity requirement of 1 TB, the following disk configurations can be deployed:

Two 600 GB 10,000 rpm SAS drives – This configuration meets the storage capacity requirement but provides far less storage performance than the second configuration below.

Eight 146 GB 15,000 rpm SAS drives – This configuration meets the storage capacity requirement and provides roughly 1,400 IOPS of storage performance.
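As a rough illustration only, using commonly cited planning figures of approximately 140 IOPS for a 10,000 rpm SAS drive and 175 IOPS for a 15,000 rpm SAS drive (these are rules of thumb, not measured values for any specific drive):

2 x 600 GB 10,000 rpm drives: 2 x 140 = approximately 280 IOPS, 1.2 TB raw capacity
8 x 146 GB 15,000 rpm drives: 8 x 175 = approximately 1,400 IOPS, approximately 1.17 TB raw capacity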

In both situations, the storage capacity requirement is met. However, the storage performance offered by each disk configuration is very different.

To understand the storage capacity and performance requirements of a VMware virtualized workload,

it is important to understand what applications and workloads are running inside the virtual machines. Twelve virtual machines running only Microsoft® Windows® 2008 might not generate a significant amount of storage I/O. However, if those 12 virtual machines are also running Microsoft SQL Server,

the storage performance requirement can be very high.

The SAN Volume Controller and the Storwize family systems enable volumes to use a large number of disk spindles by striping data across all the spindles contained in a storage pool. A storage pool can

contain a single managed disk (MDisk), or multiple MDisks. The IBM Redbooks guide titled, SAN


Volume Controller Best Practices and Performance Guidelines, provides performance best practices for configuring MDisks and storage pools. Refer to the guide at:

ibm.com/redbooks/redbooks.nsf/RedbookAbstracts/sg247521.html?OpenDocument

The heterogeneous storage workload created by VMware benefits from the volume striping performed by SAN Volume Controller and the Storwize family. It is still important to ensure that the storage

performance provided by the storage pool is sufficient for the workload. However, because most VMware workloads vary in their peak demand, volume striping and the availability of more spindles enable more consolidation.

Volume and datastore sizing

With VMware vSphere 5.0, the maximum VMFS datastore size for a single extent was raised to 64 TB. That means, a single storage volume of 64 TB can be provisioned, mapped to a VMware ESXi host,

and formatted as a VMFS datastore. The support for large volumes has been made possible by eliminating the legacy SCSI-2 locking, which VMware used to maintain VMFS integrity. More information about that is available in the “VMware storage integrations” section.

The advantage of large volumes to VMware administrators is simplified management. As long as a storage pool has the storage capacity and performance available to meet the requirements, the maximum VMFS datastore size of 64 TB can be used. However, operations such as IBM FlashCopy®, Metro Mirror or Global Mirror, or volume mirroring are impacted by volume size. For example, the initial replication of a 64 TB volume can take a significant amount of time.

VMware vSphere 5.5 offers the ability to create datastore clusters, which are a logical grouping of

VMFS datastores into a single management object. This means smaller and more manageable VMFS volumes can be grouped into a single management object for VMware administrators.

The IBM recommendation for volume and datastore sizing with VMware vSphere 5.5 and SAN Volume

Controller and the Storwize family is to use volumes sized between 1 TB and 10 TB, and to group these together into VMware datastore clusters.

Thin provisioning with VMware

Thin provisioning is included on the SAN Volume Controller and Storwize family systems and can be seamlessly implemented with VMware. Thin volumes can be created and provisioned to VMware, or volumes can be nondisruptively converted to thin volumes with volume mirroring. VMware vSphere

also includes the ability to create thin-provisioned virtual machine disks, or to convert virtual machine disks during a Storage vMotion operation.

Normally, to realize the maximum benefit of thin provisioning, thin virtual disk files must be placed on

thinly provisioned storage volumes. This ensures that only the space used by virtual machines is consumed in the VMFS datastore and on the storage volume. This makes for a complicated configuration as capacity can be over-provisioned and must be monitored at both the VMFS datastore

and storage pool levels.

SAN Volume Controller and the Storwize family systems simplify thin provisioning through a feature called Zero Detect. Regardless of what VMware virtual machine disk type is being used (zeroed thick,

eager-zeroed thick, or thin), the SAN Volume Controller or Storwize family system detects the zero


blocks and does not allocate space for them. This means that a VMware eager-zeroed thick virtual disk consumes no more physical space on a thin-provisioned volume than a VMware thin-provisioned virtual disk.

The IBM recommendation is to implement and monitor thin provisioning on the storage system. The zeroed thick or eager-zeroed thick disk types should be deployed for virtual machines.
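When virtual disks are created outside of the vSphere client, the disk type can be specified explicitly with vmkfstools from the ESXi Shell. This is a sketch only; the datastore, folder, and disk names are placeholders:

# vmkfstools -c 40G -d eagerzeroedthick /vmfs/volumes/datastore1/vm1/vm1_data.vmdk

Because of the Zero Detect behavior described above, the zeros written while creating this eager-zeroed thick disk are detected by the SAN Volume Controller or Storwize system and are not allocated on a thin-provisioned volume.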

Using Easy Tier with VMware

SAN Volume Controller and the Storwize family have a classification of a storage pool, called a hybrid pool. A storage pool is considered as a hybrid pool when it contains a mixture of solid-state drives (SSDs) and standard spinning disks. The main advantage of a hybrid storage pool is that IBM Easy

Tier can be enabled.

Easy Tier enables effective use of SSD storage by monitoring the I/O characteristics of a virtualized volume or storage pool and migrating the frequently accessed portions of data to the higher-

performing SSDs. Easy Tier reduces the overall cost and investment of SSDs by effectively managing their usage.

Easy Tier works seamlessly with the applications accessing the storage, including VMware and the virtualized workloads running on it. No special application configuration is required to benefit from Easy Tier.

Using IBM Real-Time Compression with VMware

IBM Real-time Compression is seamlessly implemented on SAN Volume Controller and the Storwize family by providing a new volume type, compressed volume. Real-Time Compression uses the Random Access Compression Engine (RACE) technology, previously available in the IBM Real-time

Compression Appliance™, to compress incoming host writes before they are committed to disk. This results in a significant reduction of data that must be stored.

Real-time Compression is enabled at the storage volume level. So, in a vSphere 5.5 environment, all virtual machines and data stored within a VMFS datastore that resides on a compressed volume are compressed. This includes operating system files, installed applications, and any data.

Lab testing and real-world measurements have shown that Real-Time Compression reduces the storage capacity consumed by VMware virtual machines by up to 70%. Real-Time Compression works

seamlessly with the VMware ESXi hosts, and therefore, no special VMware configurations or practices are required. For more information about Real-time Compression and VMware, refer to the Using the IBM Storwize V7000 Real-time Compression feature with VMware vSphere 5.0 white paper at:

ibm.com/partnerworld/wps/servlet/ContentHandler/stg_ast_sto_wp_v7000_real_time_compression

Real-Time Compression and Easy Tier can be used together, allowing for higher performance and

higher efficiency at the same time. To use this configuration, the IBM SAN Volume Controller or Storwize family must be running the software version 7.1.0 or later.


VMware storage integrations

VMware has always provided an ecosystem in which partners can integrate their products and provide additional functionality for the virtualized infrastructure. Storage partners have the opportunity to integrate

with several VMware application programming interfaces (APIs) to provide additional functionality, enhanced performance, and integrated management. The following sections outline some of these key integration points.

VMware vSphere Storage APIs for Array Integration

The vSphere Storage APIs for Array Integration (VAAI) are a set of APIs available to VMware storage partners that, when used, allow certain VMware functions to be delegated to the storage array,

enhancing performance and reducing load on servers and storage area networks (SANs). Figure 22 provides a high-level overview of the VAAI functions.

Figure 22: vStorage APIs for Array Integration relationship to VMware functions

The implementation of VAAI in vSphere 4.1 introduced three primitives: hardware-accelerated block zero, hardware-assisted locking, and hardware-accelerated full copy. The VMware vSphere 4.1 implementation of VAAI does not use standard SCSI commands to provide instructions to the storage

array. So, a device driver is required to be installed on the vSphere 4.1 ESX/ESXi hosts. You can find more details about installing the IBM Storage Device Driver for VMware VAAI at:

http://delivery04.dhe.ibm.com/sar/CMA/SDA/02l6n/1/IBM_Storage_DD_for_VMware_VAAI_1.2.0_IG.pdf

The VAAI implementation on vSphere 5.0 uses standard SCSI commands. So, a device driver is no longer required to be installed on the vSphere 5.0 and later ESXi hosts.

With both vSphere 4.1 and vSphere 5.5 implementations of VAAI, the SAN Volume Controller or the Storwize family system must be running the software version 6.2.x or later. The VAAI primitives can be easily enabled and disabled with the following methods:


Controlling VAAI through the .NET vSphere client

The VAAI primitives for hardware-accelerated block zero and full copy, namely DataMover.HardwareAcceleratedInit and DataMover.HardwareAcceleratedMove respectively, can be enabled and disabled in the vSphere host advanced settings (as shown in Figure 23). The hardware-assisted locking primitive can be controlled by changing the VMFS3.HardwareAcceleratedLocking setting, as shown in Figure 24.

Figure 23: Controlling hardware-accelerated block zero or full copy


Figure 24: Controlling hardware-assisted locking


Controlling VAAI through the vSphere Web Client

The VAAI primitives can also be controlled through the vSphere Web Client (as shown in Figure 25).

Figure 25: Controlling VAAI through the vSphere Web Client

Controlling VAAI through command line

The VAAI primitives can also be controlled through the command-line interface. The commands differ between ESX/ESXi 4.1 and ESXi 5.0 and later.

ESXi 5.0 and later

To view the status of a setting, use the esxcli command with the list option.

~ # esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove

Path: /DataMover/HardwareAcceleratedMove

Type: integer

Int Value: 0

Default Int Value: 1

Min Value: 0

Max Value: 1

String Value:

Default String Value:

Valid Characters:

Description: Enable hardware accelerated VMFS data movement (requires compliant hardware)


To change a setting, use the esxcli command with the set option.

~ # esxcli system settings advanced set -o "/DataMover/HardwareAcceleratedMove" -i 1

~ # esxcli system settings advanced set -o "/DataMover/HardwareAcceleratedMove" -i 0

ESX and ESXi 4.1

To view the status of a setting, use the esxcfg-advcfg command with the -g option:

~ # esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove

Value of HardwareAcceleratedMove is 1

To change a setting, use the esxcfg-advcfg command with the -s option:

~ # esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove

Value of HardwareAcceleratedMove is 0

~ # esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove

Value of HardwareAcceleratedMove is 1
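On ESXi 5.0 and later, VAAI support can also be confirmed at the device level. This is a sketch; the naa identifier is the example device ID used elsewhere in this paper:

# esxcli storage core device vaai status get -d naa.6005076000810006a800000000000003

The output reports the ATS, Clone, Zero, and Delete status for the device, which indicates whether the hardware-assisted locking, full copy, and block zero primitives are supported and in use on that volume.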

IBM Storage Management Console for VMware vCenter

IBM has taken advantage of the open plug-in architecture of VMware vCenter Server to develop the IBM Storage Management Console for VMware vCenter Server. The IBM Storage Management Console is a software plug-in that integrates into VMware vCenter and enables management of the

supported IBM storage systems including:

- IBM System Storage SAN Volume Controller
- IBM XIV® Storage System
- IBM Storwize V7000
- IBM Storwize V7000 Unified
- IBM Storwize V5000
- IBM Storwize V3700
- IBM Flex System V7000 Storage Node
- IBM Scale Out Network Attached Storage (SONAS)

When the IBM Storage Management Console for VMware is installed, it runs as a Microsoft Windows Server service on the vCenter Server. When a vSphere client connects to the vCenter Server, the running service is detected and the features provided by the Storage Management

Console are enabled for the client.

Features of the IBM Storage Management Console include:

- Integration of the IBM storage management controls into the VMware vSphere graphical user interface (GUI) with the addition of an IBM storage resource management tool and a dedicated IBM storage management tab
- Full management of the storage volumes, including volume creation, deletion, resizing, renaming, mapping, unmapping, and migration between storage pools
- Detailed storage reporting, such as capacity usage, FlashCopy or snapshot details, and replication status


The graphic in Figure 26 shows the relationships and interaction between the IBM plug-in, VMware vCenter and vSphere, and the IBM storage system.

Figure 26: Relationships and interaction between components

Installation and configuration

You can download the IBM Storage Management Console for VMware vCenter by accessing the IBM Fix Central website (at ibm.com/support/fixcentral/) and searching for updates available for

any of the supported IBM storage systems.

Download the installation package that is appropriate for the architecture of the vCenter server.

- On x86 architectures – IBM_Storage_Management_Console_for_VMware_vCenter-3.0.0-x86_1338.exe
- On x64 architectures – IBM_Storage_Management_Console_for_VMware_vCenter-3.0.0-x64_1338.exe

An installation and administrative guide is included in the software download.


VMware vSphere APIs for Data Protection

The vSphere APIs for Data Protection (VADP) is the enabling technology for performing backups of

VMware vSphere environments. IBM Tivoli® Storage Manager for Virtual Environments integrates with VADP to perform the following backups:

- Full, differential, and incremental full virtual machine (image) backup and restore.
- File-level backup of virtual machines running supported Microsoft Windows and Linux® operating systems.
- Data-consistent backups by using the Microsoft Volume Shadow Copy Service (VSS) for virtual machines running supported Microsoft Windows operating systems.

By using VADP, Tivoli Storage Manager for Virtual Environments can centrally back up virtual machines across multiple vSphere hosts without requiring a backup agent within the virtual machines. The backup operation is offloaded from the vSphere host, allowing the host to run more virtual machines. Figure 27 shows an example of a Tivoli Storage Manager for Virtual Environments architecture.

Figure 27: Tivoli Storage Manager for Virtual Environments architecture

Tivoli Storage Manager for Virtual Environments includes a GUI that can be used from the VMware vSphere Client. The Data Protection for VMware vCenter plug-in is installed as a vCenter Server extension in the Solutions and Applications panel of the vCenter Server system.


The Data Protection for VMware vCenter plug-in can be used to complete the following tasks:

- Create and initiate or schedule a backup of virtual machines to a Tivoli Storage Manager server
- Restore files or virtual machines from a Tivoli Storage Manager server to the vSphere host or datastore
- View reports of backup, restore, and configuration activities

Figure 28 shows the Getting Started page, which is displayed when the plug-in is first opened.

Figure 28: The Getting Started page of Tivoli Data Protection for VMware vCenter plug-in


Summary

The IBM System Storage SAN Volume Controller and Storwize family systems provide scalability and performance for VMware vSphere environments through the native characteristics of the storage systems and also through VMware API integrations.

This paper outlined configuration best practices for using SAN Volume Controller and the Storwize family, and also included information on efficiency features such as thin provisioning, Easy Tier, and Real-Time Compression that can be seamlessly deployed within a VMware environment.


Resources

The following websites provide useful references to supplement the information contained in this paper:

- IBM Systems on PartnerWorld: ibm.com/partnerworld/systems
- IBM Redbooks: ibm.com/redbooks
- IBM System Storage Interoperation Center (SSIC): ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes
- IBM Storwize V7000: ibm.com/storage/storwizev7000
- IBM System Storage SAN Volume Controller: ibm.com/systems/storage/software/virtualization/svc/index.html
- IBM TechDocs Library: ibm.com/support/techdocs/atsmastr.nsf/Web/TechDocs
- VMware vSphere 5 Documentation Center: pubs.vmware.com/vsphere-55/index.jsp


Trademarks and special notices

© Copyright IBM Corporation 2013.

References in this document to IBM products or services do not imply that IBM intends to make them

available in every country.

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked

terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A

current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml.

Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the

United States, other countries, or both.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.

Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance

characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of

such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims

related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice,

and represent goals and objectives only. Contact your local IBM office or IBM authorized reseller for the full text of the specific Statement of Direction.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive

statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort

to help with our customers' future planning.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Photographs shown are of engineering prototypes. Changes may be incorporated in production models.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of

the materials for this IBM product and use of those websites is at your own risk.