VMware® vCloud™ Director with the Hitachi Compute Blade 2000 and the Hitachi Virtual Storage Platform Reference Architecture

Henry Chu

August 2011


Feedback

Hitachi Data Systems welcomes your feedback. Please share your thoughts by sending an email message to [email protected]. Be sure to include the title of this white paper in your email message.


Table of Contents

Solution Overview
Solution Components
Hitachi Compute Blade 2000
Hitachi Virtual Storage Platform
Hitachi Adaptable Modular Storage 2000 Family
VMware vCloud Director 1.0
VMware vSphere 4.1
Solution Design
Compute Infrastructure
Network Infrastructure
Storage Infrastructure
Performance and Scalability
Compute Performance
Storage Performance
Application Performance
Conclusion
Appendix — Terminology


VMware® vCloud™ Director with the Hitachi Compute Blade 2000 and the Hitachi Virtual Storage Platform Reference Architecture

Using a VMware vSphere infrastructure can improve a data center's resource agility and utilization. However, some infrastructure silos may not use resources efficiently. Each silo can have a wide range of utilization, from very little to near full capacity. Manually managing resources to handle spikes in resource utilization can be time consuming and complex. A silo model often introduces inefficient utilization of resources and does not meet increasing business demand.

A new approach leverages a shared services infrastructure to aggregate resources from all silos and serve all organizations. However, the solution must provide built-in multi-tenancy, high availability, quality of service control, and ease of provisioning.

With VMware vCloud Director, you can meet those challenges by providing IT services through an infrastructure-as-a-service (IaaS) cloud platform. Sharing a common VMware vSphere infrastructure with vCloud Director provides dynamic resource pools with on-demand capacity using a self-service model.

VMware vCloud Director introduces the concept of provider virtual data centers (vDC). This ensures clear separation of pooled resources. Each organization consumes these pooled resources through its own self-service portal. vCloud Director builds a needed layer of abstraction on top of vSphere to provide cloud mobility and multi-tenancy.

The underlying infrastructure needs to be highly available and elastic. To ensure tenants have a seamless experience with resources on demand, the storage infrastructure must carry these same characteristics. The Hitachi Virtual Storage Platform provides an ideal storage platform for VMware vCloud Director. The storage virtualization capability of the Virtual Storage Platform allows central management of multiple storage systems, including those that lack the features of the Virtual Storage Platform.

All the features and capabilities of the Hitachi Virtual Storage Platform can apply to a heterogeneous storage environment to transform it into a homogeneous storage environment. Hitachi Dynamic Provisioning can create dynamic storage resource pools with on-demand capacity.

VMware vStorage APIs for Array Integration integrate storage resources into VMware vSphere to improve operational efficiency through hardware acceleration.

This reference architecture is for storage administrators, vCloud administrators, vSphere administrators, and application administrators charged with managing large, dynamic environments. It assumes familiarity with storage area network (SAN)-based storage systems, VMware vSphere, and common IT storage practices.


Solution Overview

This reference architecture utilizes the following:

The Hitachi Compute Blade 2000 (CB 2000) — An enterprise-class server platform

The Hitachi Virtual Storage Platform (VSP) — A high performance and highly scalable storage solution

The Hitachi Adaptable Modular Storage 2100 (AMS 2100) — A high-performance, low-cost, true active-active midrange storage solution

The Brocade 5340 Fibre Channel switch — Provides SAN connectivity to the data center’s network.

The VMware vSphere 4.1 — Virtualization technology providing the foundation for cloud computing.

The VMware vCloud Director 1.0 — Cloud platform built on top of a VMware vSphere infrastructure to enable infrastructure as a service (IaaS) cloud computing

Figure 1 is a diagram of the reference architecture, showing network and Fibre Channel redundant connectivity.

Figure 1


Solution Components

The following are descriptions of the components used in this reference architecture.

Hitachi Compute Blade 2000

The Hitachi Compute Blade 2000 is an enterprise-class blade server platform. It features the following:

A balanced system architecture that eliminates bottlenecks in performance and throughput

Embedded logical partition virtualization

Configuration flexibility

Eco-friendly power-saving capabilities

Fast server failure recovery using an N+1 cold standby design that allows replacing failed servers within minutes

Hitachi embeds logical partitioning virtualization in the firmware of the Hitachi Compute Blade 2000 server blades. This proven, mainframe-class technology combines Hitachi’s logical partitioning expertise with Intel VT technologies to improve performance, reliability, and security. Embedded logical partition virtualization does not degrade application performance and does not require the purchase and installation of additional components.

Hitachi Virtual Storage Platform

The Hitachi Virtual Storage Platform is a 3D scaling storage platform. With the unique ability to scale up, scale out, and scale deep at the same time in a single storage system, the Virtual Storage Platform flexibly adapts for performance, capacity, connectivity, and virtualization.

Scale Up — Increase performance, capacity, and connectivity by adding cache, processors, connections, and disks to the base system.

Scale Out — Combine multiple chassis into a single logical system with shared resources.

Scale Deep — Extend the advanced functions of the Virtual Storage Platform to external multivendor storage.

The Hitachi Virtual Storage Platform has features that complement the VMware vCloud Director.

Pooling with VMware DRS and Hitachi Dynamic Provisioning

Hitachi Dynamic Provisioning, a feature of the Hitachi Virtual Storage Platform, aggregates storage resources into a pool much as VMware DRS aggregates compute resources into a pool. A provider virtual data center in VMware vCloud Director is directly associated with a VMware DRS resource pool for compute resources and a Hitachi Dynamic Provisioning pool for storage resources. The pooling capabilities of VMware DRS and Hitachi Dynamic Provisioning provide a foundation for dynamic compute and storage resource allocation to provider virtual data centers.

With Hitachi Dynamic Provisioning, you can increase or decrease storage capacity and performance by adding or removing parity groups into a dynamic provisioning pool. Hitachi Dynamic Provisioning also allows oversubscription of capacity and provides wide-striping across all disks in the pool.


Data Agility with VMware Storage vMotion and Hitachi Tiered Storage Manager

Hitachi Tiered Storage Manager migrates logical volumes (LDEVs) non-disruptively to internal and external storage on a Hitachi Virtual Storage Platform. Apply Hitachi Tiered Storage Manager to LDEVs at the storage system level. Migrations with Hitachi Tiered Storage Manager are completely transparent to the virtual machines and ESX hosts.

The ability to non-disruptively migrate LDEVs allows data mobility for a provider virtual data center. This provides a tiered storage migration solution for provider virtual data centers. Service levels can be dynamically upgraded or downgraded based on business needs.

VMware’s Storage vMotion migrates virtual machine disks non-disruptively to VMFS datastores on ESX hosts. Apply VMware Storage vMotion on a virtual machine at the VMFS datastore level.

Storage Hardware Acceleration for Common vSphere Operations

VMware vStorage APIs for Array Integration (VAAI) runs key data operations on the Hitachi Virtual Storage Platform rather than at the ESX server layer. This reduces resource utilization and potential bottlenecks on physical servers, resulting in more consistent server performance and higher virtual machine density.

The Hitachi Virtual Storage Platform supports all VMware vSphere 4.1 VAAI API primitives. The following API primitives give direct benefits in a vCloud environment:

Full copy — Enables the Hitachi Virtual Storage Platform to make full copies of data within the storage system without having the ESX host read and write the data. The constant read and write operation is offloaded to the Hitachi Virtual Storage Platform. This results in a substantial reduction in provisioning times when cloning virtual machines in vCloud Director.

Hardware-assisted locking — Enables the ESX host to offload locking operations to the Hitachi Virtual Storage Platform. Offloading locking operations provides a highly scalable storage platform for vCloud Director when a high number of virtual machines might share common storage resources. This also provides a granular LUN locking method to allow locking at the logical block address level without the use of SCSI reservations or the need to lock the entire LUN from other hosts.
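As a reference aid, the short Python sketch below records the ESX 4.1 advanced settings that correspond to the VAAI primitives (full copy, block zeroing, and hardware-assisted locking) and reports which are enabled in a host configuration dump. The setting names are the standard ESX 4.1 advanced options; the reporting function and its input format are assumptions for illustration only.

# ESX 4.1 advanced settings that control each VAAI primitive (1 = enabled)
VAAI_ADVANCED_SETTINGS = {
    "Full copy": "DataMover.HardwareAcceleratedMove",
    "Block zeroing": "DataMover.HardwareAcceleratedInit",
    "Hardware-assisted locking": "VMFS3.HardwareAcceleratedLocking",
}

def enabled_primitives(host_advanced_options):
    # host_advanced_options: advanced option name -> integer value,
    # for example parsed from a host configuration export
    return [primitive for primitive, option in VAAI_ADVANCED_SETTINGS.items()
            if host_advanced_options.get(option, 0) == 1]

# Example: all three options left at their default value of 1
print(enabled_primitives({option: 1 for option in VAAI_ADVANCED_SETTINGS.values()}))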

For more information, see the Hitachi Virtual Storage Platform on the Hitachi Data Systems website.

Hitachi Adaptable Modular Storage 2000 Family

The Adaptable Modular Storage 2000 family systems are midrange storage systems with the Hitachi Dynamic Load Balancing Controller, which provides integrated, automated, hardware-based front-to-back-end I/O load balancing. This eliminates many complex and time-consuming tasks that storage administrators typically face, and it ensures that I/O traffic to back-end disk devices is dynamically managed, balanced, and shared equally across both controllers. The point-to-point back-end design virtually eliminates the I/O transfer delays and contention associated with Fibre Channel arbitration and provides significantly higher bandwidth and I/O concurrency.

The active-active Fibre Channel ports mean the user does not need to be concerned with controller ownership. I/O is passed to the managing controller through cross-path communication. Any path can be used as a normal path. The Hitachi Dynamic Load Balancing controllers assist in balancing microprocessor load across the storage systems. If a microprocessor becomes excessively busy, LU management automatically switches controllers to help balance the microprocessor load. For more information about the Adaptable Modular Storage 2000 family, see the Hitachi Data Systems Adaptable Modular Storage family website.


VMware vCloud Director 1.0

VMware vCloud Director 1.0 pools your data center resources to deliver a cloud computing infrastructure. This approach uses the efficient pooling of on-demand, self-managed virtual infrastructure to provide resources consumable as a service. This reference architecture focuses on using VMware vCloud Director strictly at the infrastructure as a service (IaaS) layer.

VMware vCloud Director combined with VMware vSphere extends your virtual infrastructure capabilities to assist the delivery of infrastructure service using cloud computing.

Find out more about VMware vCloud Director on the VMware website.

VMware vSphere 4.1

VMware vSphere 4.1 is a highly efficient virtualization platform that provides a robust, scalable, and reliable data center infrastructure. Features such as VMware Distributed Resource Scheduler (DRS), VMware High Availability, and VMware Fault Tolerance make it an easy-to-manage platform.

VMware vSphere 4.1 supports vApp, a logical grouping of one or more VMs in open virtualization format. You can encapsulate a multi-tier application with service levels and policies in a vApp.

Using the round robin multipathing policy of the VMware ESX 4 hypervisor with the dynamic load balancing of the symmetric active-active controllers distributes load across multiple host bus adapters (HBAs) and multiple storage ports.

For more information, see the VMware vSphere website.

Solution Design

The following describes the compute, network, and storage infrastructures used for this reference architecture.

Compute Infrastructure

This is the compute infrastructure used in this reference architecture:

VMware vCloud Director 1.0

VMware vSphere 4.1

Hitachi Compute Blade 2000

VMware vCloud Director 1.0

VMware vCloud Director creates abstract resources from the VMware vSphere infrastructure. Each abstract resource is presented for consumption by deploying its own virtual machine. The complete set of abstract resources is the resource stack.

Figure 2 shows the VMware vCloud Director resource stack used in this reference architecture.


Figure 2

There are provider and organization virtual data centers.

End users access an organization's virtual data center as a self-service portal. Users log on through the VMware vCloud Director Web interface to access their allocated compute, network, and storage resources.

An organization virtual data center can allocate compute resources in three ways:

Allocation Pool — A percentage of the allocated resources are committed to an organization virtual data center. The system administrator controls the over commitment of the capacity.

Pay-As-You-Go — Allocated resources are committed to an organization virtual data center only when creating a vApp.

Reservation Pool — All the allocated resources are committed to the organization virtual data center. Users can control the over commitment of the capacity at any time.

This reference architecture used the allocation pool model, which provides the best control of resource allocation. Resources can be committed to an organization in an over committed or non-over committed way. Table 1 shows the resource distribution for the allocation pool that did not allow over commitment of resources.


Table 1. Resource Allocation at the Organization Virtual Data Center Level for the Non-over Committed Pools

CPU — Gold Level Organization: 100% of 81GHz; Silver Level Organization: 100% of 63GHz; Bronze Level Organization: 100% of 51GHz
Memory — Gold Level Organization: 100% of 138GB; Silver Level Organization: 100% of 132GB; Bronze Level Organization: 100% of 123GB

An organization virtual data center can be allowed to access a large CPU and memory resource pool that is contained in its provider virtual data center resource pool. Table 2 shows the resource distribution for allocation pools which allow for the over commitment of resources.

Table 2. Resource Allocation at the Organization Virtual Data Center Level for Over-Committed Pools

CPU — Gold Level Organization: 39% of 200GHz; Silver Level Organization: 31% of 200GHz; Bronze Level Organization: 25% of 200GHz
Memory — Gold Level Organization: 34% of 402GB; Silver Level Organization: 33% of 402GB; Bronze Level Organization: 31% of 402GB

The reserved resource percentage can be used to guarantee a minimum amount of resource that is always available to the organization virtual data center. Any remaining resource is available to the organization virtual data center only when it is available in the provider data center. This model can be used to allow resource over commitment.
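To make the allocation pool arithmetic concrete, the short Python sketch below computes the guaranteed minimum for each organization from the percentages and pool sizes in Table 2. The helper function and its output format are illustrative only; they are not part of vCloud Director.

def guaranteed(percent, pool_capacity):
    # Resources always reserved for an organization vDC under the allocation pool model
    return percent / 100.0 * pool_capacity

# (CPU %, memory %) committed per organization vDC, from Table 2
orgs = {"Gold": (39, 34), "Silver": (31, 33), "Bronze": (25, 31)}
POOL_CPU_GHZ, POOL_MEM_GB = 200, 402   # provider vDC pool sizes from Table 2

for name, (cpu_pct, mem_pct) in orgs.items():
    print(name, round(guaranteed(cpu_pct, POOL_CPU_GHZ)), "GHz CPU,",
          round(guaranteed(mem_pct, POOL_MEM_GB), 1), "GB memory guaranteed")

Anything above these guaranteed amounts is granted only when the provider virtual data center has spare capacity.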

The compute resources for each organization can be dynamically changed. As each organization’s need for computing resource changes, CPU and memory resources can be increased or decreased individually to meet those demands from the provider data center.

Each organization virtual data center is associated with a provider data center. These provider data centers can be used to categorize certain levels of service level agreements or even application types. Figure 2 shows the three provider data centers used in this reference architecture: Gold, Silver, and Bronze. Each has a different level of guaranteed CPU and memory resources.

Each provider data center is associated with a resource pool within a VMware vSphere DRS cluster. These resource pools are not created by VMware vCloud Director, but manually with VMware vCenter. These resource pools are used to logically group different provider virtual data centers.

Recommended practice is to configure only one resource pool for each service level. This differentiates each service level. For example, configure one resource pool each for the Gold, Silver, and Bronze level provider virtual data centers. Set up and create all sub-level resource pools using vCloud Director.

VMware vSphere 4.1

The VMware vSphere 4.1 infrastructure provides the foundation from which vCloud Director defines resources. A robust and highly available infrastructure is necessary to ensure the best performance for VMware vCloud Director.

The VMware vSphere infrastructure was configured into two parts:

Management Cluster — This provides all the applications and services needed for VMware vCloud Director.

Resource Cluster — This provides the resources for any virtual machines deployed through VMware vCloud Director as a dedicated cluster with raw compute capacity for tenant consumption.


This ensures a clear separation of resources between management operations and cloud workloads.

Management Cluster

Figure 3 shows the management cluster used to meet the infrastructure needs to run VMware vCenter, Oracle Database, and VMware vCloud Director.

Figure 3

The management cluster contains the following components and functions:

vCenter Server 4.1 — Overall management of the virtualized infrastructure

Oracle 11G — Red Hat Enterprise Linux 5 virtual machines running Oracle 11G. This serves database instances for vCenter Server 4.1 and vCloud Director 1.0

vCloud Director 1.0 Cell — A single VMware vCloud Director instance providing the core component of vCloud Director. This creates a new layer of abstraction that facilitates communication between vCenter Server and the ESX servers, and accepts end-client requests through a built-in web portal and vCloud API calls.

vShield Manager — Management component of vShield Suite. This is responsible for vShield service agent (vShield Edge) installation, firewall rules configuration, and management.

VMware makes the following general recommendation for the number of cells needed:

Number of VMware vCloud cells = number of VMware vCenter instances + 1

The additional cell shown in the formula is used for any of the following:

A spare cell for failover

When a cell needs to be taken offline for maintenance

In addition, a single vCenter instance should be mapped to a single vCloud cell. This helps ensure resource consumption is load balanced between cells.
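As a worked example of this sizing guideline, the trivial sketch below applies the formula; the function name is ours.

def vcloud_cells_needed(vcenter_instances):
    # One cell per vCenter instance, plus one spare for failover or maintenance
    return vcenter_instances + 1

print(vcloud_cells_needed(1))   # a single-vCenter environment -> 2 cells
print(vcloud_cells_needed(3))   # three vCenter instances -> 4 cells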

For more information on vCloud Director best practices, see VMware vCloud Director 1.0 Performance and Best Practices.


Table 3 shows the virtual machine’s configuration for each service. The VMware vCenter and VMware vCloud Director are kept on separate virtual machines from their database applications.

Table 3. Virtual Machine Configuration for Each Service in the Management Cluster

VMware vCenter 4.1 — OS: Microsoft Windows Server 2008 R2 Enterprise; CPU: 2 vCPUs; Memory: 4096MB; Virtual disk: 100GB Eagerzeroedthick (OS)
Oracle Database 11g (vC DB) — OS: Microsoft Windows Server 2008 R2 Enterprise; CPU: 4 vCPUs; Memory: 8192MB; Virtual disks: 40GB Eagerzeroedthick (OS), 256GB Eagerzeroedthick (DB)
VMware vCloud Director 1.0 — OS: Red Hat Enterprise Linux 5.5; CPU: 2 vCPUs; Memory: 4096MB; Virtual disk: 40GB Eagerzeroedthick (OS)
Oracle Database 11g (vCD DB) — OS: Microsoft Windows Server 2008 R2 Enterprise; CPU: 4 vCPUs; Memory: 8192MB; Virtual disks: 40GB Eagerzeroedthick (OS), 100GB Eagerzeroedthick (DB)

The virtual machines in the management cluster are load distributed by VMware Distributed Resource Scheduler (DRS) between two ESX 4.1 hypervisor hosts. When ESX hosts are configured in a VMware DRS cluster, the resources are aggregated into a pool, allowing their use as a single entity.

A virtual machine can use resources from any host in the cluster, rather than being tied to a single host. VMware DRS manages these resources as a pool to place virtual machines on a host at power-on automatically and then monitors resource allocation. VMware DRS uses VMware vMotion to move virtual machines from one host to another when it detects a performance benefit or based on optimization decisions.

A VMware High Availability cluster provides redundancy for these virtual machines. When an ESX host fails or goes offline, the virtual machines on the failed host quickly restart on an available ESX host. This reference architecture allows for a single host failure. More redundancy can be achieved by adding additional ESX hosts into the management cluster.

Resource Cluster

Figure 4 shows the resource cluster configuration, which contains the user workload virtual machines.

Figure 4


Three resource pools (Gold, Silver, and Bronze) were created within a VMware DRS cluster. Each resource pool is mapped to a single provider virtual data center. This 1:1 mapping of resource pool to provider virtual data center helps facilitate the separation of performance characteristics of each provider virtual data center.

The virtual machines in the resource cluster are load distributed by VMware DRS across 6 ESX hosts. Because all the ESX hosts' resources in a VMware DRS cluster are dynamically aggregated, CPU and memory resources can be added or removed by adding or removing ESX hosts in the cluster. The change in resources is reflected directly in the resource pool to which the provider virtual data center is mapped.

The maximum supported configuration for VMware vCenter is 32 ESX hosts in a DRS cluster. After reaching this scale-up limit, create an additional VMware DRS cluster to scale out by another 32 ESX hosts. A single VMware vCenter instance can support a maximum of 1000 ESX hosts.
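The sketch below applies these scaling limits to estimate how many DRS clusters a given host count requires. The limits are the vCenter 4.1 values cited above; the helper itself is only an illustration.

import math

MAX_HOSTS_PER_DRS_CLUSTER = 32   # vCenter 4.1 DRS cluster limit cited above
MAX_HOSTS_PER_VCENTER = 1000     # vCenter 4.1 per-instance limit cited above

def drs_clusters_needed(esx_hosts):
    # Number of DRS clusters required for a given ESX host count under these limits
    if esx_hosts > MAX_HOSTS_PER_VCENTER:
        raise ValueError("host count exceeds a single vCenter instance")
    return math.ceil(esx_hosts / MAX_HOSTS_PER_DRS_CLUSTER)

print(drs_clusters_needed(6))    # this reference architecture's resource cluster -> 1
print(drs_clusters_needed(40))   # scale-out example -> 2 clusters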

VMware High Availability (HA) provides redundancy for the resource cluster and the user workload virtual machines. When an ESX host fails or goes offline, the virtual machines on the failed host quickly restart on an available ESX host.

Hitachi Compute Blade 2000

The Hitachi Compute Blade 2000 was used for the VMware vSphere 4.1 infrastructure. Table 4 shows the hardware configuration, with the 8 server blades shown in Figure 5.

Table 4. Hitachi Compute Blade 2000 Configuration

Model — 8 × X55A2 server blades
CPU — 2 × 6-core Intel Xeon 5670 @ 2.93GHz (12MB cache, 6.40 GT/s system bus)
RAM — 72GB DDR3 Registered DIMM


Figure 5


Table 5 shows the resource distribution of the Hitachi Compute Blade 2000. There is room for growth as the VMware vCloud and VMware vSphere infrastructures grow.

Table 5. Hitachi Compute Blade 2000 Resource Distribution

Management Cluster — Purpose: VMware vCenter 4.1, Oracle Database 11g (VMware vCenter DB), VMware vCloud Director 1.0, vShield Manager 4.1; Hosts: ESX0, ESX1; Server blades: 2; CPU: 35.16GHz; Memory: 144GB
Resource Cluster — Purpose: user workload virtual machines (Gold, Silver, and Bronze resource pools); Hosts: ESX2, ESX3, ESX4, ESX5, ESX6, ESX7; Server blades: 6; CPU: 210.96GHz; Memory: 432GB

Network Infrastructure

This is the network infrastructure used in this reference architecture. The Hitachi Compute Blade 2000 used in this solution contains the network hardware found in Table 6.

Table 6. Hitachi Compute Blade 2000 Network Hardware

NICs — 2 × onboard Intel 82576 Gigabit Ethernet (SERDES) ports; 1 × quad-port Intel 82576 Gigabit Ethernet (SERDES) mezzanine card
Switch Module 0 — 1Gb LAN switch: 20 × internal 1000Base-SERDES ports, 4 × external 1000Base-T/100Base-TX/10Base-T ports
Switch Module 1 — 1Gb/10Gb LAN switch: 20 × internal 1000Base-SERDES ports, 4 × external 1000Base-T/100Base-TX/10Base-T ports, 2 × 10GBase-LR/SR (XFP) ports
Switch Module 2 — 1Gb LAN switch: 20 × internal 1000Base-SERDES ports, 4 × external 1000Base-T/100Base-TX/10Base-T ports
Switch Module 3 — 1Gb/10Gb LAN switch: 20 × internal 1000Base-SERDES ports, 4 × external 1000Base-T/100Base-TX/10Base-T ports, 2 × 10GBase-LR/SR (XFP) ports

In total, each server blade contains 6 gigabit NICs. Each NIC is connected through the chassis’s mid-plane to the internal ports of its corresponding switch module.

Note — The first numbered VMNIC (vmnic0) in ESX is the Intel 82567LF-2. Do not use this in ESX for any network traffic.


Management Cluster

Each cluster contains a management network, VMkernel network, and virtual machine network. Each cluster's network configuration is designed for a specific purpose. Figure 6 shows the network configuration for the management cluster.

Figure 6

The management cluster uses a standard virtual switch for the service console, VMkernel, and virtual machine network. Two service console ports were used.

VMNIC1, dedicated to Service Console 1, provides the following:

Management traffic to VMware vCenter

The VMware High Availability heartbeat network to other ESX hosts on the same high availability cluster

VMware High Availability is a network-based failover trigger mechanism. To prevent an unwanted high availability failover or isolation response, use an additional VMware High Availability heartbeat network.

VMNIC2, dedicated to Service Console 2, is the redundant VMware High Availability heartbeat network.


An additional benefit of using the Hitachi Compute Blade 2000 for VMware High Availability is that the connections between the VMNICs and the switch modules are provided through SERDES ports directly connected to the chassis's mid-plane. This eliminates the chance of an accidental cable pull or faulty cable connection. Because the heartbeats for each VMware High Availability network are kept on a single switch module and broadcast domain, this design reduces the chance that an external network failure could disrupt the VMware High Availability heartbeat network.

VMNIC2 also provides a dedicated VMkernel network for VMware vMotion traffic. This keeps the vMotion traffic between the two ESX hosts in the management cluster within the chassis.

VMNIC3 and VMNIC5, dedicated for virtual machine traffic, are teamed to load balance outbound traffic.

Resource Cluster

The resource cluster uses the same hardware as the management cluster, but in a slightly different configuration designed specifically for the VMware vCloud user workload virtual machines. Figure 7 shows the network configuration for the resource cluster.

Figure 7

The VMNIC configuration for the service console and VMkernel is identical to that of the management cluster. However, the virtual machine network is configured with VMware's vNetwork Distributed Switch (vDS). Managed centrally at the cluster level, a vDS spans multiple ESX hosts. This centralizes management of the ESX host network, as is recommended for VMware vCloud Director.


Two VMware vDS switches, for Tier 1 and Tier 2, are configured for virtual machine traffic, with two uplinks per ESX host. With 6 hosts in the resource cluster, there are 12 physical uplinks (VMNICs) for each VMware vDS.

The two uplinks for each ESX host per VMware vDS are teamed for outbound load balancing based on port ID algorithm. In addition, each port used by VMNICs on Switch Module 2 and Switch Module 3 is configured as 802.1Q VLAN trunks to allow multiple LAN segments to be passed for virtual machine traffic.

The different tiers of VMware vDS are determined by the uplink bandwidth for each corresponding chassis switch module. Figure 8 shows the external uplinks for each switch module.

Figure 8

Switch Module 0, connected to Management Module 0 and Management Module 1, is dedicated to the management network of ESX, VMware vCenter, individual server blades (using a web interface), and the management of the Hitachi Compute Blade 2000 chassis. The management network uses a 1Gbit Ethernet connection.

Switch module 1 is dedicated to VMware vMotion traffic. If the VMware vMotion traffic is between server blades on a single chassis, it is contained within the chassis and does not need an external uplink. However, if a dedicated VMware vMotion network within a physical data center is available, the 10Gbit uplink from this switch can be used for connectivity.

Switch module 2, dedicated to the Tier 2 inter-chassis virtual machine traffic, uses the 1Gbit Ethernet copper connections.

Switch module 3, dedicated to the Tier 1 inter-chassis virtual machine traffic, uses the 10Gbit Ethernet fiber connections.


Storage Infrastructure

This describes the storage infrastructure. The Hitachi Compute Blade 2000 (CB 2000) has the option of using dual-port mezzanine HBA cards with dual Fibre Channel switch modules or dual PCIe Fibre Channel HBA cards. This reference architecture used the Hitachi Five-EX family PCIe dual-port 8Gbit/sec Fibre Channel cards.

Figure 9 shows the storage area network architecture.

Figure 9

Two Brocade 5340 switches were used. The first port on each HBA was connected to Switch 1. The second port on each HBA was connected to Switch 2. The multipathing policy on ESX was set to round-robin.

Two ports, one from each cluster on the Hitachi Virtual Storage Platform, were used to connect to Switch 1. Another two ports, one from each cluster on the Hitachi Virtual Storage Platform, were used to connect to Switch 2. The cross connection allows increased redundancy so that, in case of a switch failure, the ESX host still has two paths, with one on each Hitachi Virtual Storage Platform cluster.

Similarly, two ports from each controller on the Hitachi Adaptable Modular Storage 2100 were used to connect to the external ports of the Hitachi Virtual Storage Platform.
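The sketch below enumerates the logical paths this cross-connected fabric gives each ESX host and shows what survives a single switch failure. The specific pairing of target ports to Virtual Storage Platform clusters is an assumption for illustration; the text states only that each switch carries one port from each cluster.

# One ESX host's view of the fabric described above.
FABRIC = {
    "Switch 1": ("HBA1_1", {"3B": "VSP cluster 1", "4B": "VSP cluster 2"}),
    "Switch 2": ("HBA1_2", {"5B": "VSP cluster 1", "6B": "VSP cluster 2"}),
}

def available_paths(failed_switch=None):
    # Enumerate (initiator, target, VSP cluster) tuples still reachable per host
    paths = []
    for switch, (hba_port, targets) in FABRIC.items():
        if switch == failed_switch:
            continue
        for target, cluster in targets.items():
            paths.append((hba_port, target, cluster))
    return paths

print(len(available_paths()))                     # 4 paths per host in normal operation
print(available_paths(failed_switch="Switch 1"))  # 2 paths remain, one to each VSP cluster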


Table 7 shows the zone configuration for the management cluster.

Table 7. Management Cluster Zone Configuration for the vCD_MGMT_Cluster Storage Host Group

ESX 0, HBA1_1 — Zone: esx0_hba1_1_vsp_3B_4B; Storage ports: 3B, 4B
ESX 0, HBA1_2 — Zone: esx0_hba1_2_vsp_5B_6B; Storage ports: 5B, 6B
ESX 1, HBA1_1 — Zone: esx1_hba1_1_vsp_3B_4B; Storage ports: 3B, 4B
ESX 1, HBA1_2 — Zone: esx1_hba1_2_vsp_5B_6B; Storage ports: 5B, 6B

A single initiator zone configuration was used, where each HBA port is zoned to two target ports, one from each Hitachi Virtual Storage Platform cluster.

With two initiator ports and four target ports, a total of 4 paths are available. Each ESX host uses these four paths in a round robin fashion.

To simplify management and to help keep consistent LUN to host mapping for the management cluster, a single storage host group was used across the four target ports (3B, 4B, 5B, 6B).
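The zone names in Tables 7 and 8 follow a simple convention; the hypothetical helper below reproduces it so that additional hosts can be zoned consistently. Only the naming pattern comes from the tables.

def zone_name(host_index, hba_port, vsp_ports):
    # esx{host}_hba1_{port}_vsp_{target ports}, as used in Tables 7 and 8
    return "esx{0}_hba1_{1}_vsp_{2}".format(host_index, hba_port, "_".join(vsp_ports))

# Management cluster zoning from Table 7
for host in (0, 1):
    print(zone_name(host, 1, ["3B", "4B"]))   # zoned to storage ports 3B and 4B
    print(zone_name(host, 2, ["5B", "6B"]))   # zoned to storage ports 5B and 6B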

Table 8 shows the zone configuration for the resource cluster.


Table 8. Resource Cluster Zone Configuration for the vCD_Resource_Cluster Storage Host Group

ESX 2, HBA1_1 — Zone: esx2_hba1_1_vsp_3B_4B; Storage ports: 3B, 4B
ESX 2, HBA1_2 — Zone: esx2_hba1_2_vsp_5B_6B; Storage ports: 5B, 6B
ESX 3, HBA1_1 — Zone: esx3_hba1_1_vsp_3B_4B; Storage ports: 3B, 4B
ESX 3, HBA1_2 — Zone: esx3_hba1_2_vsp_5B_6B; Storage ports: 5B, 6B
ESX 4, HBA1_1 — Zone: esx4_hba1_1_vsp_3B_4B; Storage ports: 3B, 4B
ESX 4, HBA1_2 — Zone: esx4_hba1_2_vsp_5B_6B; Storage ports: 5B, 6B
ESX 5, HBA1_1 — Zone: esx5_hba1_1_vsp_3B_4B; Storage ports: 3B, 4B
ESX 5, HBA1_2 — Zone: esx5_hba1_2_vsp_5B_6B; Storage ports: 5B, 6B
ESX 6, HBA1_1 — Zone: esx6_hba1_1_vsp_3B_4B; Storage ports: 3B, 4B
ESX 6, HBA1_2 — Zone: esx6_hba1_2_vsp_5B_6B; Storage ports: 5B, 6B
ESX 7, HBA1_1 — Zone: esx7_hba1_1_vsp_3B_4B; Storage ports: 3B, 4B
ESX 7, HBA1_2 — Zone: esx7_hba1_2_vsp_5B_6B; Storage ports: 5B, 6B

The resource cluster was configured in the same way as the management cluster. This results in four paths per ESX host. However, a different storage host group was used to help keep consistent LUN to host mapping for the resource cluster.

Table 9 shows the zone configuration for the external storage.

Table 9. External Storage Zone Configuration

VSP, external port 3D — Zone: vsp_5D_ams2k_0A; Storage port: 0A
VSP, external port 4D — Zone: vsp_6D_ams2k_1A; Storage port: 1A


Two ports on the Hitachi Virtual Storage Platform were configured as external ports. Similar to the management cluster and resource cluster storage area network configuration, one port was used from each Hitachi Virtual Storage Platform cluster. One port from each cluster was zoned to one port on a controller, creating a one-to-one mapping of Hitachi Virtual Storage Platform cluster to Hitachi Adaptable Modular Storage controller.

Storage Subsystem Configuration

This describes the storage subsystem configuration. This solution uses the Hitachi Virtual Storage Platform as the primary storage. The Hitachi Virtual Storage Platform can make virtualized storage appear to the ESX hosts as if it is storage on the Hitachi Virtual Storage Platform. This means you will be able to use the advanced capabilities of the Hitachi Virtual Storage Platform, including VMware vStorage APIs for Array Integration, on the external storage whether or not the external storage supports the feature.

Figure 10 shows the storage subsystem’s disk configuration.

Figure 10

This reference architecture used dynamic provisioning volumes of 1.8TB without oversubscription. While the aggregate of the dynamic provisioning volume can be oversubscribed, ESX 4.1 only supports a maximum LUN size of 2TB.

Management HDP Pool

Figure 10 shows the Management HDP Pool, consisting of three parity groups using RAID 5 (3D+1P) 300GB 10K SAS drives in a single dynamic provisioning volume (Management DP-Vol) of 2TB. This dynamic provisioning volume is presented as a LUN to the hosts in the management cluster. This provides enough capacity and performance for most deployments.

Due to the dynamic nature of Hitachi Dynamic Provisioning, the increased capacity and performance requirements can be met easily by adding additional parity groups to the Management HDP Pool.
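As a rough sizing illustration, the sketch below estimates the raw capacity behind the Management HDP Pool from its parity group layout. Real usable capacity is lower once RAID formatting and pool metadata overhead are subtracted, so treat the figures as upper bounds; the helper functions are ours.

def raid5_usable_gb(data_drives, drive_gb):
    # Approximate usable capacity of one RAID 5 (xD+1P) parity group
    return data_drives * drive_gb

def hdp_pool_gb(parity_groups, data_drives, drive_gb):
    return parity_groups * raid5_usable_gb(data_drives, drive_gb)

# Management HDP Pool: 3 parity groups, RAID 5 (3D+1P), 300GB drives
management_pool_gb = hdp_pool_gb(parity_groups=3, data_drives=3, drive_gb=300)
print(management_pool_gb)           # roughly 2700GB of pool capacity
print(management_pool_gb >= 2048)   # enough to back the 2TB Management DP-Vol without oversubscription

The same arithmetic can be applied to the Gold, Silver, and Bronze pools described next.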


Gold HDP Pool

Figure 10 shows the Gold HDP Pool, which consists of six parity groups using RAID 5 (7D+1P) 146GB 15K SAS drives in three 1.8TB dynamic provisioning volumes (Gold DP-Vol) for a total of 5.4TB. These dynamic provisioning volumes are presented as three LUNs to the ESX hosts in the resource cluster.

The LUNs mapped from Gold DP-Vol1, Gold DP-Vol2, and Gold DP-Vol3 were used as tier 1 storage in the Gold Provider Virtual Data Center.

Silver HDP Pool

Figure 10 shows the Silver HDP Pool, which consists of six parity groups of RAID 5 (7D+1P) 300GB 10K SAS drives in six 1.8TB dynamic provisioning volumes (Silver DP-Vol) for a total of 10.8TB. These dynamic provisioning volumes are presented as six LUNs to the ESX hosts in the resource cluster.

The Silver HDP Pool, while similar to the Gold HDP Pool, used a different type of drive. The drives in the Silver HDP Pool provide slightly less performance, due to the slower spindle rotation speed, but more capacity than the drives in the Gold HDP Pool. Due to the higher available capacity, the Silver HDP Pool presents six dynamic provisioning volumes.

The LUNs mapped from Silver DP-Vol1, Silver DP-Vol2, Silver DP-Vol3, Silver DP-Vol4, Silver DP-Vol5, and Silver DP-Vol6 were used as tier 2 storage in the Silver Provider Virtual Data Center.

Bronze HDP Pool

Figure 10 shows the Bronze HDP Pool, which consists of six parity groups of RAID 5 (7D+1P) 300GB 15K SAS drives in six 1.8TB dynamic provisioning volumes (Bronze DP-Vol) for a total of 10.8TB. These dynamic provisioning volumes are presented as six LUNs to the ESX hosts in the resource cluster.

The parity groups were created using Hitachi Universal Volume Manager.

This reference architecture did not use Hitachi Dynamic Provisioning from the Hitachi Adaptable Modular Storage 2100. Instead, this reference architecture used Hitachi Dynamic Provisioning from the Hitachi Virtual Storage Platform.

The LUNs mapped from Bronze DP-Vol1, Bronze DP-Vol2, Bronze DP-Vol3, Bronze DP-Vol4, Bronze DP-Vol5, and Bronze DP-Vol6 were used as tier 3 storage in the Bronze Provider Virtual Data Center.

External Path Group

Table 10 shows the configuration used when attaching external storage from the Hitachi Adaptable Modular Storage 2100 to the Hitachi Virtual Storage Platform.


Table 10. External Path Group Configuration

Storage Model — Hitachi Adaptable Modular Storage 2100
Number of Paths — 2
Cache Mode — Disabled
Inflow Control — Disabled
Queue Depth — 32

This reference architecture disabled the cache mode for the ease of management. The following describes what results from disabling or enabling the cache mode:

Enabling cache mode utilizes cache on the Hitachi Virtual Storage Platform, signaling the host I/O operation has completed and asynchronously de-stages the data to the external storage system.

Disabling cache mode results in a synchronous I/O operation. The Hitachi Virtual Storage Platform signals the host that an I/O operation has completed only after the Hitachi Virtual Storage Platform has synchronously written the data to the external storage.

Enabling the cache mode improves response time. Disabling the cache mode provides easier management of external LDEVs, but at the cost of increased latency.

If improved latency is required, use the cache mode. When using cache mode, use a dedicated cache partition for the external LDEVs so that the internal LDEVs and external LDEVs do not share the same cache partitions. External storage may not have the same performance as the Hitachi Virtual Storage Platform, which could lead to slower de-staging of data to the external storage; this can increase cache usage and affect the internal LDEVs.

When using cache partitions, carefully consider sizing and resizing as the number of external volumes and the workload change.

This reference architecture disabled the inflow control for the ease of management. Inflow control affects how the ESX host I/Os are accepted when an external volume is blocked or unavailable. The following describes what results from disabling or enabling the inflow control:

Disabling inflow control allows I/O from the ESX host to be written to cache during a retry operation, even when a write operation to external storage is not possible. Writes continue until there is no more cache available. Cached data is de-staged when the external volume becomes available.

Enabling inflow control blocks the ESX host I/O to cache when the external volume is blocked or unavailable.

When disabling cache mode, disable inflow control because no cache is used. However, when enabling cache mode, enable inflow control.

Since cache mode is disabled in this solution, inflow control was also disabled, because no cache on the Hitachi Virtual Storage Platform is used.
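The pairing rule above can be captured in a one-line helper, sketched below for illustration only.

def recommended_inflow_control(cache_mode_enabled):
    # Match inflow control to the cache mode setting on externally virtualized volumes
    return "Enabled" if cache_mode_enabled else "Disabled"

print(recommended_inflow_control(False))   # this reference architecture: cache mode disabled -> inflow control disabled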

Performance and Scalability

This shows the performance and scalability of the VMware vCloud Director reference architecture.

For testing purposes, a mixed workload of email messages, Web pages, and OLTP was used. The workload was grouped into a tile-based system to measure application performance and scalability. Each tile contains mixed workloads that stress critical compute and storage resources. These workloads can represent a general purpose VMware vSphere environment.


Each tile consists of the following virtual machines listed in Table 11.

Table 11. Virtual Machines for each Tile

Mail Server (Exchange 2007) — Quantity: 1; CPU: 4 vCPUs; Memory: 8192MB
Olio Web Server — Quantity: 1; CPU: 4 vCPUs; Memory: 6144MB
Olio Database Server — Quantity: 1; CPU: 2 vCPUs; Memory: 2048MB
DVD Store 2 Database Server — Quantity: 1; CPU: 4 vCPUs; Memory: 4096MB
DVD Store 2 Web Server — Quantity: 3; CPU: 2 vCPUs; Memory: 2048MB
Standby — Quantity: 1; CPU: 1 vCPU; Memory: 512MB

Testing used a total of 6 tiles between 2 ESX hosts in the resource cluster. There were a total of 48 virtual machines, 126 virtual CPUs, and 159GB of configured virtual machine memory. Each tile is controlled by a single client, and each tile client is controlled by a primary client. The clients ran on other hosts, outside of the ESX hosts for the workload virtual machines. Table 12 shows how the tiles were distributed.

Table 12. Tile Distribution

Tile 1 — ESX host: ESX2; Resource pool: Gold organization vDC; Storage: Gold Datastore (Tier 1)
Tile 2 — ESX host: ESX3; Resource pool: Gold organization vDC; Storage: Gold Datastore (Tier 1)
Tile 3 — ESX host: ESX2; Resource pool: Silver organization vDC; Storage: Silver Datastore (Tier 2)
Tile 4 — ESX host: ESX3; Resource pool: Silver organization vDC; Storage: Silver Datastore (Tier 2)
Tile 5 — ESX host: ESX2; Resource pool: Bronze organization vDC; Storage: Bronze Datastore (Tier 3)
Tile 6 — ESX host: ESX3; Resource pool: Bronze organization vDC; Storage: Bronze Datastore (Tier 3)
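As a quick cross-check of the totals quoted above (48 virtual machines, 126 virtual CPUs, and roughly 159GB of configured memory), the sketch below recomputes them from the per-tile values in Table 11; the script itself is only a convenience.

# (virtual machine type, quantity, vCPUs each, memory MB each), from Table 11
TILE = [
    ("Mail Server (Exchange 2007)", 1, 4, 8192),
    ("Olio Web Server", 1, 4, 6144),
    ("Olio Database Server", 1, 2, 2048),
    ("DVD Store 2 Database Server", 1, 4, 4096),
    ("DVD Store 2 Web Server", 3, 2, 2048),
    ("Standby", 1, 1, 512),
]
TILES = 6

vms = TILES * sum(qty for _, qty, _, _ in TILE)
vcpus = TILES * sum(qty * cpu for _, qty, cpu, _ in TILE)
memory_gb = TILES * sum(qty * mem for _, qty, _, mem in TILE) / 1024.0

print(vms, vcpus, round(memory_gb))   # 48 virtual machines, 126 vCPUs, about 159GB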

Since only two ESX hosts were used, the dynamic provisioning pool for each organization virtual data center was resized to match a 2 ESX host configuration.

Table 13 has the datastore storage configuration.

Table 13. Organization Datastore Storage Configuration

Gold Datastore (Tier 1) — Disk type: 146GB 15K SAS; Parity groups in HDP pool: 2 × RAID 5 (7D+1P); Number of LUNs: 1
Silver Datastore (Tier 2) — Disk type: 300GB 10K SAS; Parity groups in HDP pool: 2 × RAID 5 (7D+1P); Number of LUNs: 1
Bronze Datastore (Tier 3) — Disk type: 300GB 15K SAS; Parity groups in HDP pool: 2 × RAID 5 (7D+1P); Number of LUNs: 1

To simulate the workloads for each organization virtual data center, two tiles were placed in each organization virtual data center's resource pool along with the datastore mapped to that virtual data center. This ensures that the virtual machines in each tile are bound by the resources allocated to each organization virtual data center. The tests captured performance metrics for compute, storage, and application throughput.

The workloads were run on two occasions, with two different VMware vCloud resource allocation pool configurations: non-over committed and over committed.


Table 14 shows the allocation pool where 100% of the CPU and memory resource allocations are committed. Resource usage cannot exceed the 100% allocation of CPU and memory resources.

Table 14. Non-Over Committed Allocation Pool

CPU — Gold Level Organization: 100% of 27GHz; Silver Level Organization: 100% of 21GHz; Bronze Level Organization: 100% of 17GHz
Memory — Gold Level Organization: 100% of 46GB; Silver Level Organization: 100% of 44GB; Bronze Level Organization: 100% of 41GB

Table 15 shows the allocation pool configuration that allows over commitment.

Table 15. Over Committed Allocation Pool

CPU — Gold Level Organization: 39% of 67.75GHz; Silver Level Organization: 31% of 67.75GHz; Bronze Level Organization: 25% of 67.75GHz
Memory — Gold Level Organization: 34% of 134GB; Silver Level Organization: 33% of 134GB; Bronze Level Organization: 31% of 134GB

The maximum CPU and memory resources available to each organization are 67.75GHz of CPU and 134GB of memory. This maximum is the result of using only 2 ESX hosts in the resource cluster. The percentage of CPU and memory resources guaranteed to each organization is as listed in Table 15. However, each organization can potentially go beyond its allocation, up to the maximum of 67.75GHz of CPU and 134GB of memory.

In both allocation pool configurations, running two tiles per organization tests the over allocation of organization virtual data center and virtual machine memory. The total memory configured for the virtual machines in 2 tiles is 54GB. Each organization has a memory over allocation of approximately 8GB for the Gold organization, 10GB for the Silver organization, and 13GB for the Bronze organization.

The non-over committed allocation pool does not have enough memory resources to meet the total demand. The over committed allocation pool can allow allocation of the configured virtual machine memory when it goes beyond the reserved percentage. However, resources beyond the reserved percentage are not guaranteed.
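The over-allocation figures above can be reproduced directly from Table 14 and the 54GB of configured tile memory, as in the short sketch below; variable names are ours.

CONFIGURED_VM_MEMORY_GB = 54   # total configured virtual machine memory for two tiles, as stated above
ALLOCATED_GB = {"Gold": 46, "Silver": 44, "Bronze": 41}   # memory allocations from Table 14

for org, allocated in ALLOCATED_GB.items():
    print(org, "memory over allocation:", CONFIGURED_VM_MEMORY_GB - allocated, "GB")
# Gold: 8 GB, Silver: 10 GB, Bronze: 13 GB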


Compute Performance

Performance for the compute resources was captured for the over committed and non-over committed allocation pools.

Figure 11 shows the CPU and memory utilization of the non-over committed allocation pool by organization virtual data center.

Figure 11

In the non-over committed allocation pool, the average resources used by each organization's virtual data center closely resemble the allocations listed in Table 14 but never exceed the 100% allocation. Since virtual machine memory was over allocated, there is memory ballooning. Of the three organizations' virtual data centers, Gold has the least amount of memory ballooning, as expected.

Figure 12 shows the CPU and memory utilization of the over committed allocation pool by organization virtual data center.


Figure 12

Figure 12 shows the organization virtual data centers’ CPU and memory utilization in the over committed allocation pool, as described in Table 15. In this configuration, each organization’s virtual data center is allowed to go beyond its percentage of committed resources when the uncommitted resources are available.


Figure 13 shows the ESX CPU and memory utilization for the non-over committed allocation pool by organization.

Figure 13

Figure 13 shows the average ESX CPU and memory utilization in the non-over committed allocation pool. There are still some CPU and memory resources available that have not been utilized.


Figure 14 shows the ESX CPU and memory utilization for the over committed pool by organization.

Figure 14

Figure 14 shows the average ESX CPU and memory utilization in the over committed allocation pool. Because the resource allocation pool was dynamically changed from the non-over committed to the over committed configuration, the memory swap and balloon usage decreased. This shows the advantage of allowing over commitment of resources. ESX hosts have the ability to ensure reserved resources are met. With the over committed allocation pool, more of the hardware resources are utilized.

These results show that using a bigger CPU and memory resource allocation pool, while committing only a certain percentage of the resource pool to each organization's virtual data center, can increase hardware utilization.

Storage Performance

This analyzes the storage performance of this reference architecture. Performance of the storage resources was captured in the over committed and non-over committed allocation pool test configurations.

ESX Storage Performance

Storage performance, as reported by ESX, was analyzed for each organization's VMFS datastore. This was performed for both the over committed allocation pool and the non-over committed allocation pool.

Figure 15 shows IOPS by ESX for the non-over committed allocation pool by organization.


Figure 15

Figure 15 shows the IOPS on each ESX host for each datastore used by each organization's virtual data center in a non-over committed allocation pool. The I/O distribution between the hosts is fairly even.

Figure 16 shows IOPS by ESX for the over committed allocation pool by organization.

Figure 16

Figure 16 shows the IOPS on each ESX host for each datastore used by each organization’s virtual data center in an over committed allocation pool. I/O distribution between all hosts is fairly even.

The results show that, compared with the non-over committed allocation pool, the over committed allocation pool generates more IOPS because more computing resources are available to each organization's virtual data center.


Virtual Storage Platform Performance

Since the over committed allocation pool generates more IOPS and stresses the storage system more, the over committed allocation pool is the focus of this analysis.

Figure 17 shows the IOPS for each LDEV in the over committed allocation pool on the Hitachi Virtual Storage Platform.

Figure 17

Figure 17 shows the IOPS produced by the LDEV for each datastore in the over committed allocation pool. The IOPS shown here are nearly identical to the combined IOPS of the 2 ESX hosts shown in Figure 16. This indicates very little I/O overhead between the ESX hosts and the Hitachi Virtual Storage Platform.


Figure 18

Figure 18 shows the I/O profile seen by the Hitachi Virtual Storage Platform in the over committed allocation pool. Due to mixed workloads from each organization's virtual data center, the I/O profile is nearly 100% randomized.

Figure 19


With highly randomized I/O workloads, the back-end disk subsystem can be stressed. If there are not enough disks to handle all the inflow of I/O from the cache, cache write pending can be a problem. Cache write pending above 30% can result in emergency de-stage, which can impact overall performance. Figure 19 shows that cache write pending for both Gold and Silver is below 30%. Since cache mode was disabled for the Bronze external storage, cache write pending would not be on the Hitachi Virtual Storage Platform but on the external storage.
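The 30% guideline can be expressed as a trivial threshold check, sketched below for illustration only.

EMERGENCY_DESTAGE_THRESHOLD_PERCENT = 30.0

def write_pending_ok(observed_percent):
    # True while cache write pending stays below the emergency de-stage level
    return observed_percent < EMERGENCY_DESTAGE_THRESHOLD_PERCENT

print(write_pending_ok(27.5))   # True: below the 30% level reported for Gold and Silver in Figure 19
print(write_pending_ok(31.0))   # False: emergency de-stage territory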

The Virtual Storage Platform performance results show that the disk configuration is adequately sized to handle the highly randomized I/O generated from the general workloads in each tile.

Storage Latency

Storage latency was analyzed at the ESX host level using esxtop.

Figure 20 shows the storage latency of the over committed allocation pool on the ESX hosts.

Figure 20

Since an over committed allocation pool generates more IOPS, latency is important to analyze. Figure 20 shows the latency measured at the ESX host level for the over committed allocation pool. Even with the higher IOPS, the internal storage of the Hitachi Virtual Storage Platform (Gold and Silver) shows very low latency. The Bronze external virtualized storage still shows reasonable response times, even without enabling the cache mode.

If storage latency for an organization virtual data center needs improvement, the following options are available:

Enabling cache mode and dedicating a cache partition for external LDEVs — This reduces latency due to the asynchronous writes between the Virtual Storage Platform and the external storage. However, careful sizing and resizing of the cache partitions will be necessary.

Migrating the organization’s provider virtual data center storage — To efficiently provide storage resources to a provider virtual data center, keep all datastores in a provider virtual data center at the same tier.


In the use case of migrating a virtual machine for tiering purposes, VMware Storage vMotion provides a granular method to migrate virtual machines from one tier of VMFS datastore to another datastore. VMFS datastores within a provider virtual data center are used in a round-robin fashion.

Tiered storage migrations with VMware Storage vMotion involve adding and removing VMFS datastores during a migration. Hitachi Tiered Storage Manager can migrate entire LDEVs to any LDEV within the Hitachi Virtual Storage Platform, including LDEVs from virtualized external storage and LDEVs from a dynamic provisioning pool. LDEVs used for a single provider virtual data center can be grouped into migration groups and migrated at the block level. No additional configuration management is necessary in VMware vSphere and VMware vCloud Director, since this is completely transparent to ESX hosts.

Application Performance

This analyzes the application performance of this reference architecture. Performance of each application was captured in the over committed and non-over committed allocation pool test configurations.

Figure 21 shows the application workload throughput for non-over committed allocation pool.

Figure 21

Figure 21 shows the scores from each workload of the six tiles in a non-over committed allocation pool. The mail server workload shows identical actions per minute across organizations, while the other workloads follow the trend of the resources allocated to each organization's virtual data center.

[Figure 21 chart: application workload throughput for the non-over committed allocation pool, showing Mailserver (actions/min), Olio (operations/min), and DVDStoreA, DVDStoreB, and DVDStoreC (transactions/min) for Gold Tiles 0 and 1, Silver Tiles 2 and 3, and Bronze Tiles 4 and 5; vertical axis 0 to 5000]

Figure 22 shows the application workload quality of service for the non-over committed allocation pool.


Figure 22

Figure 22 shows the quality of service (QoS) metric from each workload in a non-over committed allocation pool. The mail server shows a dramatic increase in latency going from the Gold to the Bronze organization's virtual data center. Even though the mail server's actions per minute were identical across all of the organizations' virtual data centers, the quality of service indicates that users of the Gold organization virtual data center have a better experience than users of the Bronze organization's virtual data center. The DVD Store application, which simulates an OLTP workload, shows a drastic increase in latency going from the Gold to the Bronze organization's virtual data center.

[Figure 22 chart: application workload QoS (ms) for the non-over committed allocation pool, showing Mailserver, Olio, DVDStoreA, DVDStoreB, and DVDStoreC for Gold Tiles 0 and 1, Silver Tiles 2 and 3, and Bronze Tiles 4 and 5; vertical axis 0 to 4000 ms]

Figure 23 shows the application workload throughput for the over committed allocation pool.


Figure 23

Figure 23 shows the scores from each workload of the six tiles in an over committed allocation pool. Compared to the non-over committed allocation pool, the scores for the Silver and Bronze organizations' virtual data centers show a significant improvement, while the scores for the Gold organization's virtual data center remained the same. This indicates that the committed resources in the Gold organization's virtual data center are sufficient for its computing and storage needs.

It also indicates that over commitment significantly improves performance for any organization virtual data center that needs additional compute resources, without affecting resources that are already committed.

[Figure 23 chart: application workload throughput for the over committed allocation pool, showing Mailserver (actions/min), Olio (operations/min), and DVDStoreA, DVDStoreB, and DVDStoreC (transactions/min) for Gold Tiles 0 and 1, Silver Tiles 2 and 3, and Bronze Tiles 4 and 5; vertical axis 0 to 5000]

Figure 24 shows the application workload quality of service for the over committed allocation pool.


Figure 24

[Figure 24 chart: application workload QoS (ms) for the over committed allocation pool, showing Mailserver, Olio, DVDStoreA, DVDStoreB, and DVDStoreC for Gold Tiles 0 and 1, Silver Tiles 2 and 3, and Bronze Tiles 4 and 5; vertical axis 0 to 4000 ms]

Figure 24 shows the quality of service (QoS) metric from each workload in an over committed allocation pool. Compared to the non-over committed allocation pool, the quality of service was significantly improved for the Silver organization’s and the Bronze organization’s virtual data centers. However, the lower tier of storage puts limits on the quality of service of the Bronze organization’s virtual data center.

The results show that assigning a storage tier and committing a percentage of compute resources to an organization's virtual data center can be used to establish distinct levels of service. Increasing the CPU and memory allocation pool while keeping the absolute amount of committed CPU and memory the same can drastically improve performance while still maintaining a guaranteed level of resources.
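The relationship between the size of an allocation pool and its guarantee percentage can be shown with simple arithmetic. The numbers in the Python sketch below (a 20 GHz CPU allocation with a 50% guarantee, expanded to 40 GHz at 25%) are hypothetical and do not come from this reference architecture; they only illustrate how the absolute reservation stays constant while the headroom grows.

def reserved(allocation, guarantee_pct):
    # Absolute amount of a resource guaranteed to an organization virtual data center.
    return allocation * guarantee_pct / 100.0

# Hypothetical example: a 20 GHz CPU allocation with a 50% guarantee reserves 10 GHz.
original_reservation = reserved(20.0, 50)
# Doubling the allocation to 40 GHz while lowering the guarantee to 25%
# keeps the same 10 GHz reservation but allows bursting up to 40 GHz.
expanded_reservation = reserved(40.0, 25)

print(f"original: {original_reservation:.1f} GHz reserved, expanded: {expanded_reservation:.1f} GHz reserved")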

Conclusion

Using VMware vCloud Director with the Hitachi Virtual Storage Platform, organizations and applications can share a common vSphere infrastructure in which all resources are pooled, providing elastic resource pools with on-demand capacity. By mapping dedicated Hitachi Dynamic Provisioning pools to provider virtual data centers, different tiers of performance can be delivered to organization virtual data centers. Pooling resources increases hardware utilization and application performance, which in turn enables greater server consolidation.

To achieve elastic resource pools with on-demand capacity, vCloud Director introduces a resource allocation model for each organization virtual data center in which the allocation can be dynamically increased or decreased. These resource allocations are mapped to vSphere resource pools, whose total resources can be increased or decreased by adding or removing ESX hosts in the DRS cluster. To stay in step with the elastic resource pools and on-demand capacity of vCloud Director, the Hitachi Virtual Storage Platform pools storage resources through Hitachi Dynamic Provisioning. Disk resources in a Hitachi Dynamic Provisioning pool can easily be increased or decreased by adding or removing parity groups, and this can be done dynamically without interruption to virtual machines.



In addition, Hitachi Tier Storage Manager allows non-disruptive migration of LDEVs anywhere within the Hitachi Virtual Storage Platform storage system, including LDEVs on external storage virtualized behind the Virtual Storage Platform and LDEVs in a Hitachi Dynamic Provisioning pool, while applications continue to run in the virtual machines. LDEVs used for a single provider virtual data center can be grouped in migration groups and migrated at the block level. No additional configuration management is necessary in vSphere or vCloud Director, since the migration is completely transparent to the ESX hosts.

Hitachi Data Systems Global Services offers experienced storage consultants, proven methodologies and a comprehensive services portfolio to assist you in implementing Hitachi products and solutions in your environment. For more information, see the Hitachi Data Systems Global Services web site.

Live and recorded product demonstrations are available for many Hitachi products. To schedule a live demonstration, contact a sales representative. To view a recorded demonstration, see the Hitachi Data Systems Corporate Resources web site. Click the Product Demos tab for a list of available recorded demonstrations.

Hitachi Data Systems Academy provides best-in-class training on Hitachi products, technology, solutions and certifications. Hitachi Data Systems Academy delivers on-demand web-based training (WBT), classroom-based instructor-led training (ILT) and virtual instructor-led training (vILT) courses. For more information, see the Hitachi Data Systems Academy web site.

For more information about Hitachi products and services, contact your sales representative or channel partner, or visit the Hitachi Data Systems web site.

Appendix — Terminology

The following terms are used with VMware vCloud Director:

Virtual Datacenter (vDC)

A virtual datacenter is an allocation mechanism for resources such as networks, storage, CPU, and memory. In a virtual datacenter, computing resources are fully virtualized and can be allocated based on demand, service level requirements, or a combination of demand and service level requirements.

Provider Virtual Datacenter

A provider virtual datacenter contains all the resources available from the vCloud service provider. vCloud system administrators create and manage the provider virtual datacenters.

Organization Virtual Datacenter

An organization virtual datacenter provides an environment where virtual systems can be stored, deployed, and operated. It also provides storage for virtual media, including floppy disks and CD-ROMs. An organization administrator specifies how resources from a provider virtual datacenter are distributed to the virtual datacenters in an organization.


vCloud Cell

A vCloud cell is an instance of the vCloud Director server.

vApp

A vApp encapsulates one or more virtual machines, including their inter-dependencies and resource allocations. This allows for single-step power operations, cloning, deployment, and monitoring of tiered applications spanning multiple virtual machines.


Hitachi is a registered trademark of Hitachi, Ltd., in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd., in the United States and other countries. All other trademarks, service marks and company names mentioned in this document are properties of their respective owners. Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems Corporation.

© Hitachi Data Systems Corporation 2011. All Rights Reserved. AS-100-00 August 2011

Corporate Headquarters: 750 Central Expressway, Santa Clara, California 95050-2627 USA, www.hds.com

Regional Contact Information: Americas: +1 408 970 1000 or [email protected]; Europe, Middle East and Africa: +44 (0) 1753 618000 or [email protected]; Asia Pacific: +852 3189 7900 or [email protected]