

Deploy Hitachi Unified Compute Platform Select for VMware View with VMware vSphere 5

Reference Architecture Guide

By Tim Darnell

January 8, 2013


Feedback

Hitachi Data Systems welcomes your feedback. Please share your thoughts by sending an email message to [email protected]. To assist the routing of this message, use the paper number in the subject and the title of this white paper in the text.


Table of Contents

Solution Overview
Key Solution Components
    Hardware Components
    Software Components
Solution Design
    SAN Architecture
    Storage Architecture
Engineering Validation
    Test Methodology
    Test Scenarios
    Test Results
Conclusion


Deploy Hitachi Unified Compute Platform Select for VMware View with VMware vSphere 5

Reference Architecture Guide

This reference architecture guide describes deploying a Hitachi Unified Compute Platform Select for VMware View solution that scales from hundreds to thousands of desktops. This guide provides information to plan and deploy linked clone desktops in a VMware View 5.1 environment using the following:

Hitachi Unified Storage VM

Hitachi Compute Blade 500

VMware vSphere 5.0

Hitachi Unified Compute Platform is a family of integrated and flexible reference solutions. Each Unified Compute Platform solution, configured for immediate deployment, runs top tier infrastructure applications without purchasing or provisioning unnecessary equipment. The entire solution is stack certified and compatible.

Prior to production deployment in your environment, run a VMware View pilot program to gather sizing and IOPS information for production environment planning purposes.

This reference architecture guide is for virtualization or desktop engineers who need to implement a linked clone desktop environment. You need a working familiarity with the techniques and practices used for the products listed in this guide.

Note — Testing of this configuration was in a lab environment. Many things affect production environments beyond prediction or duplication in a lab environment. Follow the recommended practice of conducting proof-of-concept testing for acceptable results in a non-production, isolated test environment that otherwise matches your production environment before your production implementation of this solution.


Solution Overview

This reference architecture guide uses the following:

Hitachi Unified Storage VM

Hitachi Compute Blade 500

VMware View 5.1.1

VMware vSphere 5.0

Brocade 6510 enterprise fabric switches

Brocade VDX-6720 Ethernet switches

Determine User Workload

An important factor when sizing a VMware View environment for acceptable end-user performance is defining the typical workload profile of the end users who will use the environment. The workloads used for sizing this solution are based on View Planner from VMware.

This reference architecture uses a Microsoft Windows 7 32-bit desktop running a knowledge user workload for defining sizing requirements. The workload represents an average knowledge-based user running a common set of applications. Table 1 lists the applications that View Planner exercises during workload testing.

Table 1. Workload Application Definition

Workload Type: Knowledge-based user

Applications Exercised:

Adobe Acrobat Reader

Microsoft Excel

Microsoft Internet Explorer

Mozilla Firefox

Microsoft Outlook

Microsoft PowerPoint

Media Player

Microsoft Word


Linked clone sizing was based on the compute and memory resources available on Hitachi Compute Blade 500. Table 2 lists the recommendations for desktop sizing, based on high-density or highly available configurations.

*The IOPS estimates are an average taken while the user is logged onto the virtual desktop and working. They do not take into account desktop start-up, user logon operations, or user logoff operations.

For the purposes of this reference architecture guide, all sizing and testing was done using a knowledge user workload averaging four to seven IOPS per desktop during steady state.

Note — Hitachi Data Systems recommends that you perform in-depth testing to determine the correct resource requirements of each type of end user in a production VDI environment.

Table 2. Virtual Desktop Sizing

Workload Type: Knowledge-based user
Operating System: Microsoft Windows 7, 32-bit
vCPU Allocation: 1
Memory Allocation per User: 1 GB
Average Steady State IOPS*: 4 to 7
High-Density Users per Core: 15.6
Highly Available Users per Core: 7.8
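For planning purposes, the Table 2 figures reduce to simple arithmetic. The sketch below is illustrative only: the constants come directly from Table 2, and the helper functions are hypothetical, not part of any Hitachi or VMware tooling.

```python
import math

# Sizing figures from Table 2 (knowledge-based user workload).
HIGH_DENSITY_USERS_PER_CORE = 15.6
HIGHLY_AVAILABLE_USERS_PER_CORE = 7.8
STEADY_STATE_IOPS_PER_DESKTOP = (4, 7)   # average steady-state range

def cores_needed(desktops: int, users_per_core: float) -> int:
    """Physical cores required for a desktop count, rounded up."""
    return math.ceil(desktops / users_per_core)

def steady_state_iops(desktops: int) -> tuple[int, int]:
    """Low and high aggregate steady-state IOPS estimates."""
    low, high = STEADY_STATE_IOPS_PER_DESKTOP
    return desktops * low, desktops * high

print(cores_needed(2000, HIGH_DENSITY_USERS_PER_CORE))      # 129 cores
print(cores_needed(2000, HIGHLY_AVAILABLE_USERS_PER_CORE))  # 257 cores
print(steady_state_iops(2000))                              # (8000, 14000)
```

Estimates like these are a starting point only; as the note above says, validate each user type with in-depth testing.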


Logical Design

Figure 1 illustrates the high-level logical design of this reference architecture on Hitachi Unified Storage VM and Hitachi Compute Blade 500.

Figure 1

The solution in this reference architecture guide supports up to 2,000 Microsoft Windows 7, 32-bit, linked clone desktops with 1 GB of RAM running a knowledge user workload. Scale out this architecture to support thousands of desktops using the Hitachi Data Systems cell design.


Solution Components

These are the major components used in this solution.

Hitachi Unified Storage VM

Hitachi Unified Storage VM is an entry-level enterprise storage platform. It combines storage virtualization services with unified block, file, and object data management. This versatile, scalable platform offers a storage virtualization system to provide central storage services to existing storage assets.

Unified management delivers end-to-end central storage management of all virtualized internal and external storage on Unified Storage VM. A unique, hardware-accelerated, object-based file system supports intelligent file tiering and migration, as well as virtual NAS functionality, without compromising performance or scalability.

The benefits of Unified Storage VM are the following:

Enables the move to a new storage platform with less effort and cost when compared to the industry average

Increases performance and lowers operating cost with automated data placement

Supports scalable management for growing, complex storage environments while using fewer resources

Achieves better power efficiency with more storage capacity for more sustainable data centers

Lowers operational risk and data loss exposure with data resilience solutions

Consolidates management with end-to-end virtualization to prevent virtual server sprawl

Hitachi Compute Blade 500

Hitachi Compute Blade 500 combines high-end features with the high compute density and adaptable architecture you need to lower costs and protect your investment. Safely mix a wide variety of application workloads on a highly reliable, scalable, and flexible platform. Add server management and system monitoring at no cost with Hitachi Compute Systems Manager, which integrates seamlessly with Hitachi Command Suite in IT environments using Hitachi storage.


Hitachi Dynamic Provisioning

On Hitachi storage systems, Hitachi Dynamic Provisioning provides wide striping and thin provisioning functionality.

Using Dynamic Provisioning is like using a host-based logical volume manager (LVM), but without incurring host processing overhead. It provides one or more wide-striping pools across many RAID groups. Against each pool, you can create one or more dynamic provisioning virtual volumes (DP-VOLs) with a logical size of up to 60 TB, without initially allocating any physical space.

Deploying Dynamic Provisioning avoids the routine issue of hot spots that occur on logical devices (LDEVs). These occur within individual RAID groups when the host workload exceeds the IOPS or throughput capacity of that RAID group. Dynamic provisioning distributes the host workload across many RAID groups, which provides a smoothing effect that dramatically reduces hot spots.

When used with Hitachi Unified Storage VM, Hitachi Dynamic Provisioning has the benefit of thin provisioning. Physical space assignment from the pool to the dynamic provisioning volume happens as needed using 42-MB pages, up to the logical size specified for each dynamic provisioning volume. There can be a dynamic expansion or reduction of pool capacity without disruption or downtime. You can rebalance an expanded pool across the current and newly added RAID groups for an even striping of the data and the workload.
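The 42-MB page behavior can be sketched as simple arithmetic. This is an illustrative model of the allocation described above, not an interface to Hitachi Dynamic Provisioning; the helper name and the example disk sizes are assumptions.

```python
import math

# Illustrative model of thin provisioning with Hitachi Dynamic
# Provisioning: physical space is assigned to a DP-VOL from the pool
# in 42 MB pages, on demand, up to the DP-VOL's logical size
# (at most 60 TB). Hypothetical helper, not an HDS API.

PAGE_MB = 42
MAX_DPVOL_TB = 60

def pages_for(written_mb: int) -> int:
    """42 MB pages physically allocated after writing written_mb."""
    return math.ceil(written_mb / PAGE_MB)

# Example: a 24 GB linked clone OS disk with 2 GB written so far
# consumes pages only for the data actually written.
written_mb = 2 * 1024
print(pages_for(written_mb))            # 49 pages
print(pages_for(written_mb) * PAGE_MB)  # 2058 MB physically allocated
```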

VMware View 5.1

VMware View 5.1 provides virtual desktops as a managed service. Using View, you can create images of approved desktops and then deploy them automatically, as needed. Desktop users access their personalized desktop, including data, applications, and settings, from anywhere with network connectivity to the server. PCoIP, a high-performance display protocol, provides an enhanced end-user experience compared to traditional remote display protocols.

VMware View 5.1 introduces a new feature called VMware View Accelerator. VMware View Accelerator offloads commonly read blocks into hypervisor host memory, reducing the I/O seen at the back-end storage subsystem. This reduces the time to complete read-intensive operations in the environment, such as boot storms and anti-virus storms.


VMware vSphere 5.0

VMware vSphere 5 is a virtualization platform that provides a datacenter infrastructure. It features vSphere Distributed Resource Scheduler (DRS), High Availability, and Fault Tolerance.

VMware vSphere 5 has the following components:

ESXi 5.0 — A hypervisor that loads directly on a physical server. It partitions one physical machine into many virtual machines that share hardware resources.

vCenter Server — Management of the vSphere environment through a single user interface. With vCenter, there are features available such as vMotion, Storage vMotion, Storage Distributed Resource Scheduler, High Availability, and Fault Tolerance.

Brocade Switches

Brocade and Hitachi Data Systems partner to deliver storage networking and data center solutions. These solutions reduce complexity and cost, as well as enable virtualization and cloud computing to increase business agility.

The solution uses the following Brocade products:

Brocade 5460 8 Gb SAN switch for Hitachi Compute Blade 500

Brocade 6510

Brocade VDX 6720 switch

Brocade VDX 6746


Solution Design

This Hitachi Unified Compute Platform Select for VMware View reference architecture uses a cell-based architecture that provides the packaged components necessary to build a solution. The cell-based architecture defines the compute, network, and storage resources necessary to support a defined workload.

Use the Hitachi Unified Compute Platform Select design to implement a solution that scales in a cost-effective manner to meet your changing business needs quickly. Depending on density or availability requirements, scale from 250 to 6,000 desktops using a single Hitachi Unified Storage VM storage subsystem and multiple Hitachi Compute Blade 500 chassis.

Figure 2 illustrates a maximum-density cell footprint which supports 6,000 knowledge-based users.

Figure 2


The architecture consists of preconfigured cells designed to support a defined user workload. All cell sizing targets are for high density environments.

The minimum cell configuration required for Hitachi Unified Compute Platform Select for VMware View consists of:

Infrastructure cell for compute resources — Foundation for compute components

Infrastructure cell for storage resources — Foundation for storage components

Application cell for VMware View linked clones — Resources for hosting VMware View Linked Clone desktops

Resource cell for VMware View replicas — Resources for hosting VMware View Linked Clone replica disks

Figure 3 illustrates the minimum cell configuration required as shown on the 6,000 user footprint.

Figure 3


Optional cells for additional functionality or performance within the solution include the following:

Application cell for Unified Compute Platform Select management — Resources for hosting specific management services for VMware vSphere and VMware View

This cell is required only if no VMware vSphere management environment exists, or if specific application management services need to be isolated.

Expansion cell for compute resources — Resources for scaling out application cells

Expansion cell for storage resources — Additional expansion tray for disk-based resource cells

The required and optional cells make up this Hitachi Unified Compute Platform Select for VMware View solution to provide the needed compute and storage hardware.


Required Solution Cells

This solution requires the following cells.

Infrastructure Cell for Compute Resources

The infrastructure cell for compute resources provides the foundation for the compute components needed to start building a scalable VMware View solution. Figure 4 illustrates the individual components within the infrastructure cell for compute resources and its location in the 6,000 user footprint.

Figure 4

Use the infrastructure cell for compute resources in conjunction with the following cells:

Infrastructure cell for storage resources

Application cell for VMware View linked clones

Resource cell for VMware View replicas

Application cell for Hitachi Unified Compute Platform Select management

Expansion cell for compute resources

The infrastructure cell for compute resources supports up to two expansion cells for Hitachi Compute Blade 500 (three chassis total) before requiring a new infrastructure cell for compute resources.


Note — This solution requires the deployment of an infrastructure cell for storage resources with each infrastructure cell for compute resources.

Table 3 lists the individual components of the infrastructure cell for compute resources used in this reference architecture.

Chassis Components

The Hitachi Compute Blade 500 chassis is equipped with redundant management modules. This provides highly available access to manage and monitor the chassis, switch modules, and blades. The chassis contains redundant switch modules for high availability and maximum throughput. Hot-swappable power and fan modules allow for non-disruptive maintenance.

Network Infrastructure

The network design used in this solution provides ample bandwidth and redundancy for the following:

A fully populated infrastructure cell for compute resources

An infrastructure cell for storage resources

Up to two expansion cells for Hitachi Compute Blade 500

Table 3. Infrastructure Cell for Compute Resources Components

Hitachi Compute Blade 500 chassis (quantity: 1), containing:

2 Brocade VDX6746 DCB switch modules

2 Brocade 5460 6-port 8 Gb/sec Fibre Channel switch modules

2 chassis management modules

6 cooling fan modules

4 power supply modules

Ethernet switch (quantity: 2): Brocade VDX6720-60, 10 Gb/sec 60-port Ethernet switch

Fibre Channel switch (quantity: 2): Brocade 6510-48, 8 Gb/sec 48-port Fibre Channel switch


Figure 5 illustrates the physical network configuration of the infrastructure cell for compute resources.

Figure 5


The network design utilizes the advanced features of the Brocade VDX switch family, such as VCS fabric technology. This provides:

Non-stop networking

Simplified, automated networks

An approach that protects existing IT investments

See Brocade VCS Fabric Technology for more information.

SAN Infrastructure

The Hitachi Unified Storage VM controller used for this solution has 16 ports for connections to the Brocade 6510 enterprise fabric switches.

For this reference architecture, the infrastructure cell for compute resources was zoned to four ports on the Hitachi Unified Storage VM controller. When adding expansion cells for Hitachi Compute Blade 500 to the solution, zone them to four new open ports.

Figure 6 illustrates the physical SAN configuration of the infrastructure cell for compute resources.


Figure 6


Infrastructure Cell for Storage Resources

The infrastructure cell for storage resources contains all of the base storage hardware required to start building this solution.

Figure 7 illustrates the individual components within the infrastructure cell for storage resources and its location in the 6,000 user footprint.

Figure 7

Use an infrastructure cell for storage in conjunction with the following cells:

Infrastructure cell for compute resources

Application cell for VMware View linked clones

Resource cell for VMware View replicas

Application cell for Hitachi Unified Compute Platform Select management

Expansion cell for storage resources

Note — You must deploy an infrastructure cell for compute resources with each infrastructure cell for storage resources.


The infrastructure cell for storage provides the storage infrastructure for all of the cells in the solution. After fully utilizing an infrastructure cell for storage, add additional infrastructure cells for storage to scale out the solution.

Table 4 shows the components of the infrastructure cell for storage resources.

Each infrastructure cell for storage can support up to twelve application cells for VMware View linked clones.

The infrastructure cell for storage houses the following for this solution:

Application or resource cells

Hot spare drives

Table 4. Infrastructure Cell for Storage Resources Components

Hitachi Unified Storage VM controller (quantity: 1):

Dual controllers and Fibre Channel modules

16 × 8 Gb/sec Fibre Channel ports

64 GB cache

Hitachi Unified Storage SFF disk expansion tray, zero disks (quantity: 1)


Application Cell for VMware View Linked Clones

The application cell for VMware View linked clones contains all compute and storage components necessary to support up to 500 linked clone desktops for knowledge-based users.

Figure 8 illustrates the individual components within the application cell for VMware View linked clones and its location in the 6,000 user footprint.

Figure 8

Use an application cell for VMware View linked clones in conjunction with the following cells:

Infrastructure cell for compute resources

Infrastructure cell for storage resources

Resource cell for VMware View replicas

Expansion cell for compute resources

Expansion cell for storage resources

Add the compute components of the application cell for VMware View linked clones to the infrastructure cell for compute resources, and the storage components to the infrastructure cell for storage resources to start building a scalable environment. Each application cell for VMware View linked clones supports up to 500 knowledge-based users.


To scale out the solution, add additional application cells for VMware View linked clones to your infrastructure cell for compute resources or expansion cell for Hitachi Compute Blade 500 to increase capacity. Up to 12 application cells for VMware View linked clones can be supported by a single infrastructure cell for compute resources and infrastructure cell for storage resources before requiring new compute and storage infrastructure cells.
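The scale-out limits above reduce to simple arithmetic. The helper below is hypothetical; it encodes only the figures stated in this guide (500 users per application cell, up to 12 application cells per pair of compute and storage infrastructure cells).

```python
import math

# Scale-out arithmetic from the text: each application cell for
# VMware View linked clones supports 500 knowledge-based users, and
# a single pair of infrastructure cells (compute + storage) supports
# up to 12 application cells before new infrastructure is required.

USERS_PER_APP_CELL = 500
APP_CELLS_PER_INFRA = 12

def cells_needed(users: int) -> int:
    """Application cells required for a user count, rounded up."""
    return math.ceil(users / USERS_PER_APP_CELL)

def infra_pairs_needed(users: int) -> int:
    """Compute/storage infrastructure cell pairs required."""
    return math.ceil(cells_needed(users) / APP_CELLS_PER_INFRA)

print(cells_needed(6000))        # 12 application cells
print(infra_pairs_needed(6000))  # 1 infrastructure cell pair
print(infra_pairs_needed(6500))  # 2 infrastructure cell pairs
```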

Compute Infrastructure

The application cell for VMware View linked clones supports a maximum density of 500 knowledge-based desktops per cell. However, in a high-density configuration, a cell cannot support the failover of desktops in the case of a server blade failure.

Enable VMware View Accelerator for desktop pools to reduce the read I/O seen on the Hitachi Unified Storage VM. The maximum amount of memory on the ESXi hosts that can be dedicated to VMware View Accelerator is 2 GB. The 520HB1 server blades included in the application cell for VMware View linked clones contain enough memory to dedicate the maximum memory to VMware View Accelerator.

To design for high availability, create a dedicated High Availability and Distributed Resource Scheduler cluster and place the hosts from each application cell into the cluster. This ensures the separation of resources from management and other workloads for optimal hypervisor efficiency and desktop performance.

To ensure that a minimum of one host is available for High Availability resources, reduce the number of desktops in the cluster by 250. If you require additional High Availability host capacity, continue to reduce desktops in sets of 250 for each host required for a High Availability resource allocation.

Based on VMware View Composer maximums, each High Availability and Distributed Resource Scheduler cluster can support up to four application cells for VMware View linked clones (8 hosts).
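The High Availability sizing rules above can be sketched as a small calculator. The function is hypothetical; it encodes only the figures stated in this section (500 desktops and 2 hosts per application cell, 250 desktops removed per reserved High Availability host, at most 4 cells per cluster).

```python
# Illustrative HA cluster sizing per the rules above: each
# application cell contributes 2 hosts and 500 desktops, a cluster
# holds at most 4 cells (8 hosts), and each host reserved for High
# Availability failover removes 250 desktops of capacity.

MAX_CELLS_PER_CLUSTER = 4
DESKTOPS_PER_CELL = 500
DESKTOPS_PER_HOST = 250

def cluster_capacity(cells: int, ha_spare_hosts: int = 1) -> int:
    """Usable desktop count for a cluster of application cells."""
    if not 1 <= cells <= MAX_CELLS_PER_CLUSTER:
        raise ValueError("a cluster supports 1 to 4 application cells")
    return cells * DESKTOPS_PER_CELL - ha_spare_hosts * DESKTOPS_PER_HOST

print(cluster_capacity(4))                    # 1750 desktops, one HA host
print(cluster_capacity(4, ha_spare_hosts=2))  # 1500 desktops, two HA hosts
```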

Table 5 lists the components of the application cell for VMware View linked clones.


Table 6 lists the configuration of the Microsoft Windows 7 virtual machine used for the linked clone desktops.

Table 7 lists the software components in the application cell for VMware View linked clones.

Table 5. Application Cell for VMware View Linked Clones Components

520HB1 server blade (quantity: 2):

2 × 6-core Intel Xeon E5-2640 2.5 GHz processors

256 GB RAM

1 Emulex 2-port 10 GbE onboard CNA card

1 Emulex 2-port 8 Gb Fibre Channel mezzanine card

SFF disk drives (quantity: 24): 600 GB 10k RPM SAS drives, configured as RAID-10 (2D+2D), installed in the disk tray for this cell

Hot spare (quantity: 1): installed in the infrastructure cell for storage resources disk tray

Hitachi Unified Storage SFF disk expansion tray (quantity: 1): added to the infrastructure cell for storage resources

Table 6. Microsoft Windows 7 Virtual Machine Configuration

Operating System: Microsoft Windows 7, 32-bit
Number of CPUs: 1
Memory: 1024 MB
Operating system disk size: 24 GB

Table 7. Application Cell for VMware View Linked Clones Software Components

VMware View Agent (32-bit): 5.1.1.799444
VMware Tools: 8.6.5-621624
VMware ESXi: 5.0.0 Build 608089
Microsoft Windows 7: 32-bit Enterprise


Network Infrastructure

Configure each of the 520HB1 server blades with a single onboard two-channel 10 GbE CNA card for network traffic. Split each CNA card into four logical NICs per channel, for a total of eight NICs per blade.

For the purpose of this design, only use three NICs per channel. This allows maximum bandwidth for the desktop network.

Add each pair of vmnics as an active physical adapter in its respective vSwitch. This allows for redundancy in the network fabric in case of blade chassis switch module or upstream switch failure.

Set the bandwidth allocation for each NIC as follows:

Channel 0 and 1 NIC 0 (vmnic0 and vmnic1)

Virtual machine management network

VMkernel management network vSwitch 0

1 Gb per NIC, for a total of 2 Gb

Channel 0 and 1 NIC 1 (vmnic2 and vmnic3)

VMware vMotion network

VMkernel vMotion network vSwitch 1

2 Gb per NIC, for a total of 4 Gb

Channel 0 and 1 NIC 2 (vmnic4 and vmnic5)

Desktop Network

Virtual machine network vSwitch 2

7 Gb per NIC, for a total of 14 Gb

If your environment needs additional VLANs, management kernel ports, or vMotion NICs, do the following:

Enable NIC 3 (vmnic6 and vmnic7).

Adjust bandwidth allocations appropriately.
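The allocation plan above can be captured as data and checked against the 10 Gb/sec channel limit. This is a minimal sketch; the dictionary labels are descriptive, not switch or hypervisor configuration syntax.

```python
# Per-channel bandwidth plan from the text: each 10 GbE CNA channel
# carries three logical NICs, and NIC 3 (vmnic6/vmnic7) is left
# unallocated for optional use. Totals must not exceed 10 Gb/sec.

ALLOCATION_GB = {
    "management (vSwitch 0, vmnic0/vmnic1)": 1,
    "vMotion (vSwitch 1, vmnic2/vmnic3)": 2,
    "desktop (vSwitch 2, vmnic4/vmnic5)": 7,
}

per_channel = sum(ALLOCATION_GB.values())
assert per_channel <= 10, "over-subscribed 10 GbE channel"
print(per_channel)       # 10 Gb used per channel
print(per_channel * 2)   # 20 Gb aggregate across both channels
```

Enabling NIC 3 for extra VLANs or VMkernel ports means reducing one of the other allocations so the channel total stays within 10 Gb.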


Figure 9 shows the CNA and Fibre Channel to switch module mapping for Hitachi Compute Blade 500.

Figure 9

The following VLANs separate network traffic in the application cell for VMware View linked clones:

Management-VLAN — Chassis management connections and primary management of the ESXi hypervisors

vMotion-VLAN — Configured for vMotion

Desktop-VLAN — Configured for the desktop network

Storage Infrastructure

The storage infrastructure of the application cell for VMware View linked clones consists of twenty-four 600 GB 10k RPM SAS drives in a single Hitachi Dynamic Provisioning pool made up of six RAID-10 (2D+2D) parity groups.

Dedicate the dynamic provisioning pool for the use of VMware View linked clone desktop virtual machine disks. Since the I/O profile of linked clone disks is approximately 80% random write I/O, separating and dedicating spindles to these virtual disks ensures optimal performance. This keeps distinct I/O workloads separated from those such as replica workloads, which are highly read-intensive.

This supports sustained light user workloads as well as burst situations, such as boot and logon storms. If IOPS requirements change, add parity groups to the dynamic provisioning pool to increase IOPS capacity.
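As a rough cross-check of pool capability, the drive count and write-heavy profile above can be combined with common rules of thumb. The per-drive IOPS figure and RAID-10 write penalty below are generic planning assumptions, not Hitachi specifications or test results from this guide.

```python
# Rough host-IOPS estimate for the linked-clone pool: 24 x 10k SAS
# drives in RAID-10 at the ~80% random-write profile noted above.
# 140 IOPS per 10k drive and a RAID-10 write penalty of 2 (two
# back-end writes per host write) are rule-of-thumb assumptions.

DRIVES = 24
IOPS_PER_DRIVE = 140
WRITE_PENALTY = 2
WRITE_RATIO = 0.8

backend_iops = DRIVES * IOPS_PER_DRIVE
# host_iops * (write_ratio * penalty + read_ratio * 1) = backend_iops
host_iops = backend_iops / (WRITE_RATIO * WRITE_PENALTY + (1 - WRITE_RATIO))
print(round(host_iops))   # 1867 host IOPS at 80% write
```

Such estimates ignore controller cache and VMware View Accelerator read offload, which is why measured pilot data, not arithmetic, should drive final sizing.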


Figure 10 illustrates the storage configuration for scaling out four application cells for VMware View linked clones.

Figure 10

The 520HB1 server blades in this reference architecture use dual-port 8 Gb/sec Fibre Channel mezzanine cards with redundant connections to the Brocade 6510 enterprise fabric switches.

The environment uses single-initiator, multi-target zoning for each port on the 520HB1 server blades. Following best practice, the SAN environment was configured in a dual-fabric topology for redundancy and high availability. This results in four paths available to each ESXi host, providing the following:

Resiliency to failure

Redundant paths to the storage subsystem

Set the multipathing policy for each target to round robin in the ESXi configuration. This results in optimal load distribution during an all paths available situation.


Table 8 shows the zoning configuration of four application cells for VMware View linked clones in a 2,000 user architecture.
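The zone names in Table 8 follow a fixed naming pattern, so the full zoning configuration can be generated rather than hand-typed. A sketch, assuming the chassis identifier CB500_04 and the blade-to-cell mapping shown in the table; the generator function itself is hypothetical.

```python
# Generate the Table 8 zoning plan: two blades per application cell,
# two HBAs per blade, HBA 1 zoned to storage ports CL1-A/CL2-A and
# HBA 2 to CL1-B/CL2-B. Chassis identifier "CB500_04" and the zone
# name pattern are taken from the table itself.

def zones(cells: int = 4):
    """Yield (host, hba, zone_name, storage_ports) for each blade HBA."""
    for blade in range(cells * 2):                 # two blades per cell
        host = f"Cell{blade // 2 + 1}-{blade % 2 + 1:02d}"
        for hba, ports in ((1, ("CL1-A", "CL2-A")), (2, ("CL1-B", "CL2-B"))):
            suffix = "1A_2A" if hba == 1 else "1B_2B"
            name = f"CB500_04_B{blade}_HBA{hba}_1_HUS_VM_{suffix}"
            yield host, hba, name, ports

for host, hba, name, ports in zones():
    print(host, f"HBA {hba}", name, "/".join(ports))
```

Generating names this way keeps the single-initiator, multi-target pattern consistent when adding expansion cells.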

Table 8. Zoning Configuration of Four Application Cells for VMware View Linked Clones

Host      HBA    Zone Name                           Storage Ports
Cell1-01  HBA 1  CB500_04_B0_HBA1_1_HUS_VM_1A_2A     CL1-A, CL2-A
Cell1-01  HBA 2  CB500_04_B0_HBA2_1_HUS_VM_1B_2B     CL1-B, CL2-B
Cell1-02  HBA 1  CB500_04_B1_HBA1_1_HUS_VM_1A_2A     CL1-A, CL2-A
Cell1-02  HBA 2  CB500_04_B1_HBA2_1_HUS_VM_1B_2B     CL1-B, CL2-B
Cell2-01  HBA 1  CB500_04_B2_HBA1_1_HUS_VM_1A_2A     CL1-A, CL2-A
Cell2-01  HBA 2  CB500_04_B2_HBA2_1_HUS_VM_1B_2B     CL1-B, CL2-B
Cell2-02  HBA 1  CB500_04_B3_HBA1_1_HUS_VM_1A_2A     CL1-A, CL2-A
Cell2-02  HBA 2  CB500_04_B3_HBA2_1_HUS_VM_1B_2B     CL1-B, CL2-B
Cell3-01  HBA 1  CB500_04_B4_HBA1_1_HUS_VM_1A_2A     CL1-A, CL2-A
Cell3-01  HBA 2  CB500_04_B4_HBA2_1_HUS_VM_1B_2B     CL1-B, CL2-B
Cell3-02  HBA 1  CB500_04_B5_HBA1_1_HUS_VM_1A_2A     CL1-A, CL2-A
Cell3-02  HBA 2  CB500_04_B5_HBA2_1_HUS_VM_1B_2B     CL1-B, CL2-B


The storage target ports for each cell depend on the host Hitachi Compute Blade 500 chassis. See “Expansion Cell for Compute Resources” on page 32 for details.
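Because the zone names in Table 8 follow a regular convention (chassis, blade slot, HBA number, and storage port pair), the full zoning list can be generated rather than typed by hand. A sketch that reproduces the Table 8 names; the host-to-blade-slot mapping is read off the table itself:

```python
# Generate the Table 8 zone names for the four application cells.
# Blade slots B0-B7 map to hosts Cell1-01 through Cell4-02; HBA 1 zones
# to storage ports CL1-A/CL2-A and HBA 2 zones to CL1-B/CL2-B.

def zone_name(blade_slot, hba):
    port_pair = "1A_2A" if hba == 1 else "1B_2B"
    return f"CB500_04_B{blade_slot}_HBA{hba}_1_HUS_VM_{port_pair}"

zones = [(f"Cell{slot // 2 + 1}-0{slot % 2 + 1}", hba, zone_name(slot, hba))
         for slot in range(8) for hba in (1, 2)]

for host, hba, name in zones:
    print(host, f"HBA {hba}", name)
```

Generating the names this way also makes it easy to diff the intended zoning against what a fabric switch reports.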

Resource Cell for VMware View Replicas

The resource cell for VMware View replicas contains the storage components necessary to host the replica disks for linked clone desktop deployments.

Figure 11 on page 26 illustrates the individual components within the resource cell for VMware View replicas and their location in the 6,000 user footprint.

Table 8. Zoning Configuration of Four Application Cells for VMware View Linked Clones (Continued)

Host      Host HBA Number  Zone Name                        Storage Ports
Cell4-01  HBA 1            CB500_04_B6_HBA1_1_HUS_VM_1A_2A  CL1-A, CL2-A
Cell4-01  HBA 2            CB500_04_B6_HBA2_1_HUS_VM_1B_2B  CL1-B, CL2-B
Cell4-02  HBA 1            CB500_04_B7_HBA1_1_HUS_VM_1A_2A  CL1-A, CL2-A
Cell4-02  HBA 2            CB500_04_B7_HBA2_1_HUS_VM_1B_2B  CL1-B, CL2-B


Figure 11

Use a Resource Cell for VMware View Replicas in conjunction with the following cells:

Infrastructure cell for compute resources

Infrastructure cell for storage resources

Application cell for VMware View linked clones

The resource cell for VMware View replicas hosts the cloned replica of the desktop gold image for deployment in a VMware View linked clone pool.

Pair this cell with a maximum of two application cells for VMware View linked clones. You can install this cell in either of the following:

The infrastructure cell for storage resources

An expansion cell for storage resources


Table 9 shows the components of the resource cell for VMware View replicas.

Place the storage of the resource cell for VMware View replicas in a dynamic provisioning pool dedicated to VMware View replica disks. Since the I/O profile of replica disks is approximately 99% read I/O, separating and dedicating spindles to these virtual disks ensures optimal read cache utilization on Hitachi Unified Storage VM.

Use this cell in conjunction with up to two application cells for VMware View linked clones. This supports up to 1,000 linked clone desktops running a knowledge user workload. If you add more application cells for VMware View linked clones, you must add additional resource cells for VMware View replicas to the dynamic provisioning pool to increase replica IOPS capacity.

Figure 12 on page 28 shows the storage configuration for scaling out two resource cells for VMware View replicas to support four application cells for VMware View linked clones.

Table 9. Resource Cell for VMware View Replicas Components

Hardware         Detail Description                                  Quantity
SFF Disk Drives  600 GB 10k RPM SAS drives, configured as            4
                 RAID-10 (2D+2D), installed in the disk tray
                 for this cell
Hot Spare        Installed in the infrastructure cell for Hitachi    1
                 Unified Compute Platform Select management disk
                 tray or the expansion cell for storage resources


Figure 12


Optional Solution Cell Definitions

Using these cells is optional in your implementation of this reference architecture.

Application Cell for Hitachi Unified Compute Platform Select Management

The application cell for Hitachi Unified Compute Platform Select Management contains the compute and storage components for hosting VMware vSphere, VMware View, or other Unified Compute Platform Select management services.

Use this cell if no existing vSphere, View, or Unified Compute Platform Select management resources exist.

Figure 13 shows the individual components within the application cell for Unified Compute Platform Select management.

Figure 13

Use an application cell for Unified Compute Platform Select management in conjunction with the following cells:

Infrastructure cell for compute resources

Infrastructure cell for storage resources

Application cell for VMware View linked clones

Resource cell for VMware View replicas


You can add the application cell for Unified Compute Platform Select management to the infrastructure cell for compute resources and the infrastructure cell for storage resources, or as a separate environment outside of this Hitachi Unified Compute Platform Select for VMware View solution.

If installed outside of the Unified Compute Platform Select for VMware View solution, follow the recommended virtual machine requirements to size the compute hardware used to host the infrastructure and management servers.

Compute Infrastructure

The application cell for Hitachi Unified Compute Platform Select Management provides enough capacity to support an emergency high availability event if a single server blade fails. Use a dedicated High Availability and Distributed Resource Scheduler cluster for the application cell for Unified Compute Platform Select Management to ensure virtual machine failover in the event of a hardware failure. This cluster separates resources from desktop and other workloads for optimal hypervisor efficiency and management infrastructure performance.

If a server blade fails, Hitachi Data Systems recommends replacing the failed blade as soon as possible to restore optimal performance.

Table 10 shows the components of the application cell for Unified Compute Platform Select management.

The compute infrastructure of the application cell for Hitachi Unified Compute Platform Select management supports all associated VMware View, Active Directory, DHCP, and VMware vCenter requirements. The Unified Compute Platform Select for VMware View solution requires these servers. Either deploy them within the application cell for Unified Compute Platform Select management or deploy them in an existing environment.

Table 10. Application Cell for Hitachi Unified Compute Platform Select Management Hardware

Hardware             Detail Description                                Quantity
520HB1 server blade  2 × 6-core Intel Xeon E5-2640 2.5 GHz processors  2
                     128 GB memory
                     1 Emulex 2-port 10 GbE onboard CNA card
                     1 Emulex 2-port 8 Gb/sec Fibre Channel
                     mezzanine card
SFF Disk Drives      600 GB 10k RPM SAS drives, configured as          8
                     RAID-6 (6D+2P), installed in the application
                     cell for Unified Compute Platform Select
                     management disk tray
Hot Spare            Installed in the application cell for Unified     1
                     Compute Platform Select management disk tray
                     or the expansion cell for storage resources


Storage Infrastructure

The storage infrastructure of the application cell for Hitachi Unified Compute Platform Select management consists of eight 600 GB 10k RPM SAS drives in a dynamic provisioning pool built from a single RAID-6 (6D+2P) parity group. Dedicate the dynamic provisioning pool to VMware View, VMware vSphere, and the management virtual machines on Unified Compute Platform Select.

Zone each 520HB1 server blade in the application cell for Unified Compute Platform Select management to Hitachi Unified Storage VM through the Brocade 5460 Fibre Channel switch modules. Use single initiator to multi target zoning for each port on the 520HB1 server blades. This results in four available paths to each ESXi host, providing the following:

Resiliency to failure

Redundant paths to the storage subsystem

Set the multipathing policy for each target to round robin in the ESXi configuration. This provides optimal load distribution when all paths are available.

Table 11 shows the zoning configuration used for the Application Cell for Unified Compute Platform Select Management.

The storage target ports for each cell depend on the Hitachi Compute Blade 500 chassis in which they are hosted. See “Expansion Cell for Compute Resources” on page 32 for details.

Table 11. Application Cell for Hitachi Unified Compute Platform Select Management Zone Configuration

Host         Host HBA Number  Zone Name                     Storage Ports
Blade0-ESX0  HBA 1            CB500_B0_HBA1_1_HUS_VM_1A_2A  CL1-A, CL2-A
Blade0-ESX0  HBA 2            CB500_B0_HBA1_2_HUS_VM_1B_2B  CL1-B, CL2-B
Blade1-ESX1  HBA 1            CB500_B1_HBA1_1_HUS_VM_1A_2A  CL1-A, CL2-A
Blade1-ESX1  HBA 2            CB500_B1_HBA1_2_HUS_VM_1B_2B  CL1-B, CL2-B


Server Configuration Sizing Guidelines

It is critical to allocate resources properly for the management infrastructure virtual machines used in the Hitachi Unified Compute Platform Select for VMware View environment. Even with proper sizing of desktop resources, user experience can suffer if the management infrastructure is resource starved or undersized. If using an existing environment outside of the Unified Compute Platform Select for VMware View solution, use the virtual machine sizing recommendations in Table 12 to size the hosting hardware listed in Table 10 on page 30.

Table 12 lists the virtual machine configurations used for each component of the management infrastructure.

Table 13 shows the software versions used in the application cell for Unified Compute Platform Select management.

Expansion Cell for Compute Resources

Use the expansion cell for compute resources to scale out the Hitachi Unified Compute Platform Select for VMware View solution beyond the Hitachi Compute Blade 500 chassis included in the infrastructure cell for compute resources.

Table 12. Virtual Machine Sizing Recommendations

Virtual Machine                                Configuration       Quantity
Microsoft Active Directory, DNS, and DHCP      1 vCPU, 8 GB vRAM   1
VMware vCenter                                 2 vCPU, 8 GB vRAM   1
Microsoft SQL Server 2008 database for VMware  2 vCPU, 8 GB vRAM   1
vCenter, VMware View Composer, and the
VMware View Event DB
VMware View connection server                  4 vCPU, 16 GB vRAM  2
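As a sanity check on Table 12 against the blade hardware in Table 10, the virtual machine totals can be added up. The check below is ours, not from the guide:

```python
# Sum the management virtual machine resources from Table 12 and compare
# them against a single 520HB1 blade from Table 10 (2 x 6-core CPUs,
# 128 GB of RAM), since the two-blade cell must tolerate losing one blade.

vms = [  # (name, vCPU, vRAM in GB, quantity)
    ("AD, DNS, and DHCP",      1,  8, 1),
    ("vCenter",                2,  8, 1),
    ("SQL Server 2008",        2,  8, 1),
    ("View connection server", 4, 16, 2),
]

total_vcpu = sum(cpu * qty for _, cpu, _, qty in vms)
total_vram = sum(ram * qty for _, _, ram, qty in vms)

print(total_vcpu, total_vram)  # 13 56
# 13 vCPUs fit on 12 physical cores (24 threads with Hyper-Threading) and
# 56 GB of vRAM fits well within 128 GB, so one blade can carry the whole
# management workload during an HA failover.
assert total_vcpu <= 24 and total_vram <= 128
```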

Table 13. Application Cell for Unified Compute Platform Select Management Software Versions

Software                                     Version
VMware View                                  5.1.1, Build 799444
VMware vCenter                               5.0.0, Update 1b
VMware ESXi                                  5.0.0, Update 1, Build 608089
Microsoft Windows Server 2008 R2 Enterprise  SP2
Microsoft SQL Server 2008 R2                 Standard


Figure 14 illustrates the individual components within the expansion cell for compute resources.

Figure 14

Use an expansion cell for compute resources in conjunction with the following cells:

Infrastructure cell for compute resources

Application cell for VMware View linked clones

Use the expansion cell for compute resources when the Hitachi Compute Blade 500 chassis included with the infrastructure cell for compute resources is fully populated. Connect the expansion cell for compute resources to the existing Brocade VDX 6720 and Brocade 6510 switching infrastructures included in the infrastructure cell for compute resources and infrastructure cell for storage resources.

You can add up to two expansion cells for compute resources to an infrastructure cell for compute resources before you must add a new pair of infrastructure cells for compute resources and storage resources.


Chassis Components

The expansion cell for compute resources uses the same chassis components contained in "Infrastructure Cell for Compute Resources" on page 11.

Networking Infrastructure

The networking for the expansion cell for compute resources uses the same configuration as the Hitachi Compute Blade 500 chassis described in "Infrastructure Cell for Compute Resources" on page 11.

Storage Infrastructure

Use four of the open storage target ports on Hitachi Unified Storage VM in the infrastructure cell for storage resources. Follow the same storage configuration described in "Infrastructure Cell for Compute Resources" on page 11 to use the newly provisioned storage target ports in the zoning configuration.

Figure 15 shows the storage target ports of a fully scaled out solution.

Figure 15

Expansion Cell for Storage Resources

Use the expansion cell for storage resources to scale out the Hitachi Unified Compute Platform Select for VMware View solution beyond the Hitachi Unified Storage VM disk trays included in the infrastructure cell for storage resources and the application cells for VMware View linked clones.


Figure 16 shows the individual components within the expansion cell for storage resources.

Figure 16

Use an expansion cell for storage resources in conjunction with the following cells:

Infrastructure cell for storage resources

Resource cell for VMware View Replicas

Use the expansion cell for storage resources when the disk expansion tray included with the infrastructure cell for storage resources is fully populated with hot spare disks and resource cell for VMware View replicas disks.

Scale Out Using Solution Cells

Use Hitachi Dynamic Provisioning to scale out solution cells in a VDI environment. Doing this ensures that IOPS or throughput needs can be met during periods of heavy utilization. This also allows all desktops in the solution to take advantage of the additional IOPS and throughput available.

To scale out using solution cells, add the necessary pre-validated cells to the solution. The following examples show scalability up to 6,000 knowledge-based users in a maximum density environment.
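The cell counts in Table 14 through Table 16 follow from the ratios stated earlier: 500 desktops per application cell and one replica cell per two application cells. The chassis-per-2,000-users ratio used here for expansion compute cells is inferred from the tables, so treat it as an assumption. A sketch of the arithmetic:

```python
import math

# Derive cell counts for a maximum density deployment from the ratios in
# this guide: 500 linked clone desktops per application cell, one replica
# cell per two application cells, and (inferred from Tables 14-16) one
# Compute Blade 500 chassis per 2,000 desktops, where the first chassis
# is the infrastructure cell for compute resources.

def cells_needed(users):
    app = math.ceil(users / 500)
    replica = math.ceil(app / 2)
    chassis = math.ceil(users / 2000)
    return {
        "application": app,
        "replica": replica,
        "expansion_compute": chassis - 1,
    }

print(cells_needed(2000))  # {'application': 4, 'replica': 2, 'expansion_compute': 0}
print(cells_needed(6000))  # {'application': 12, 'replica': 6, 'expansion_compute': 2}
```

The 2,000, 4,000, and 6,000 user results of this sketch match Tables 14, 15, and 16.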


Scale Out to 2,000 Knowledge-Based Users

Table 14 lists the cells and quantities necessary to deploy the 2,000 user maximum density architecture.

Figure 17 on page 37 shows a fully populated 2,000 user configuration taken from the 6,000 user footprint.

Table 14. Cells Necessary for 2,000 User Maximum Density Architecture

Cell Type Quantity

Infrastructure cell for compute resources 1

Infrastructure cell for storage resources 1

Application cell for VMware View linked clones 4

Resource cell for VMware View replicas 2


Figure 17


Scale Out to 4,000 Knowledge-Based Users

Table 15 lists the cells and quantities necessary to deploy the 4,000 user maximum density architecture.

Figure 18 on page 39 shows a fully populated 4,000 user configuration taken from the 6,000 user footprint.

Table 15. Cells Necessary for 4,000 User Maximum Density Architecture

Cell Type Quantity

Infrastructure cell for compute resources 1

Infrastructure cell for storage resources 1

Application cell for VMware View linked clones 8

Resource cell for VMware View replicas 4

Expansion cell for compute resources 1

Expansion cell for storage resources 1


Figure 18

Scale Out to 6,000 Knowledge-Based Users

Table 16 lists the cells and quantities necessary to deploy the 6,000 user maximum density architecture.

Table 16. Cells Necessary for 6,000 User Maximum Density Architecture

Cell Type Quantity

Infrastructure cell for compute resources 1

Infrastructure cell for storage resources 1

Application cell for VMware View linked clones 12

Resource cell for VMware View replicas 6

Expansion cell for compute resources 2

Expansion cell for storage resources 1


Figure 19 shows a fully populated 6,000 user configuration with empty optional application and resource cells.

Figure 19


Engineering Validation

This section describes the test methodology used to validate this reference architecture and the results of the validation testing.

This reference architecture tested the core components of the Hitachi Unified Compute Platform Select for VMware View solution to determine maximum loads per application cell that the solution could support while still maintaining an acceptable end-user experience.

The tested components were validated to support up to 500 linked clone desktops per application cell running a knowledge user workload. The actual number of desktops in a deployed environment will vary, depending on workload and high availability requirements.

Figure 20 illustrates the Hitachi Unified Compute Platform Select cells tested in a 2,000-user reference architecture.

Figure 20


Test Methodology

Testing used a single Hitachi Compute Blade 500 chassis to test a 2,000 seat linked clone configuration, including:

One infrastructure cell for compute resources

One infrastructure cell for storage resources

Four application cells for VMware View linked clones

Two resource cells for VMware View replicas

Steady State Testing

VMware View Planner generated the workload for the lab validation testing. Testing was done with a knowledge user workload that generated between four and seven IOPS per desktop during steady state operation. VMware View Accelerator was enabled for all testing.

Table 17 shows the workload profile configuration options used on the View Planner appliance.

A run profile was created in View Planner to execute the workload profile described in Table 17. Table 18 shows the run profile configuration options used on the View Planner appliance.

Table 17. View Planner Workload Profile Configuration

View Planner Workload Profile Option  Value
Applications Selected                 Word, Internet Explorer, Adobe Reader, Excel Sort,
                                      PowerPoint Presentation, Archive-7zip, Firefox,
                                      Outlook, Multimedia Application, Web Album
Multimedia Application Speed          Slow
Iterations                            5
Think Time                            20
Use Host Timing                       Enabled
Randomize Execution                   Enabled

Table 18. View Planner Run Profile Configuration

View Planner Run Profile Option  Value
Number of virtual machines       2000
Ramp up time                     5
Test type                        Local


VMware View Accelerator and Boot Storm Testing

To determine the reduction in I/O seen at the Hitachi Unified Storage VM with VMware View Accelerator, 2,000 user boot storms were performed with VMware View Accelerator enabled and disabled. Results were timed to determine how many minutes the linked clone desktops took to become available as ready for use in VMware View Administrator.

Test Results — Steady State

These are the test results for the environment operating in a steady state condition.

Compute Infrastructure

Multiple performance metrics were collected from the ESXi hypervisors during the test. Figure 21 on page 43 through Figure 24 on page 46 show the performance data for the 2,000 user test run. Eight 520HB1 server blades provide sufficient hardware resources for the hypervisors to support 2,000 View linked clone desktops.

Hypervisor CPU Performance

Figure 21 shows the physical CPU metrics collected on the ESXi hypervisors while running the 2,000 user steady state workload.

Figure 21


There are three peaks in this chart:

61 minutes

136 minutes

211 minutes

These represent the middle three iterations (scored iterations) of the VMware View Planner test run.

With percent utilization peaking at 98%, the blades ran at close to maximum CPU capacity.

Hypervisor Memory Performance

There were two 520HB1 server blades per desktop pool, each containing 256 GB of RAM. Each 500-user desktop pool was split between the two server blades, allowing commitment of 1024 MB to each virtual machine (250 virtual machines per server blade × 1024 MB per virtual machine is 256,000 MB, or about 250 GB per server blade, just under the 256 GB of RAM on a single server blade).

Figure 22 on page 44 illustrates the benefits of transparent page sharing in a VMware VDI environment, sharing over 35% of the granted 256 GB of virtual machine RAM.

Figure 22


Over 60 GB of physical memory remains free, allowing for flexibility of workload types and dedicated memory for VMware View Accelerator.

Swap metrics were not graphed as swap read per second and swap write per second were recorded at zero for the entirety of the test.

The overall used memory throughout the 2,000 user test was relatively low. The 520HB1 server blade has adequate memory headroom for more varied workloads where transparent page sharing may not be as effective.

Hypervisor Storage Performance

Figure 23 shows the storage latency statistics, as seen from the linked clone virtual machines, for the linked clone and replica datastores.

While latency statistics are higher on the replica datastore, this configuration ran at a 12:1 linked clone to replica spindle ratio. The end-user experience remained well within acceptable ranges.

Figure 23


Guest operating system metrics were collected for each virtual desktop during the 2,000 user test. Figure 24 illustrates a representative virtual machine, showing the VMDK IOPS statistics during steady state operation.

The IOPS peaked at 16 during steady state.

The IOPS averaged between 4 and 7 during steady state.

Figure 24

Storage Infrastructure

Multiple performance metrics were collected from the Hitachi Unified Storage VM storage subsystem during the tests. To understand the I/O profile of the typical knowledge user workload, metrics were analyzed from the Hitachi Unified Storage VM controllers to ensure the following:

Physical disks were not saturated

Processor cores and cache performed well

Dynamic provisioning pools performed well

Physical disk performance was acceptable during the 2,000 user test.


Figure 25 shows the following:

Busy statistics for representative parity groups from the linked clone and replica dynamic provisioning pools

Write pending rate statistics from the Hitachi Unified Storage VM storage controllers

Figure 25

These metrics are acceptable, showing that there is still room for growth to support bursts of heavier workloads, if necessary. For example, these metrics show the following:

Linked clone drives have up to 55% headroom during steady state operations.

Replica drives have up to 90% headroom during steady state operations. This is due to a combination of VDI workloads mainly consisting of the following:

Write I/O

View Accelerator off-loading IOPS to ESXi host memory

Read cache utilization on the Hitachi Unified Storage VM controllers

Write pending rate does not rise above 30%, indicating efficiency in drive and controller cache operations.


Figure 26 illustrates the processor and cache performance metrics.

Figure 26

Processor core utilization and cache utilization on Hitachi Unified Storage VM were monitored throughout the entirety of the test.

The cache on Hitachi Unified Storage VM was well utilized, between 97% and 99% during steady state operation.

Average processor core utilization did not rise above 30% during steady state operation.

This data indicates the Hitachi Unified Storage VM controllers are adequate for 2,000 VMware View linked clones, with plenty of headroom for future growth.

Figure 27 on page 49 shows the IOPS performance for the dynamic provisioning pool dedicated to the VMware View linked clone datastores.

The read IOPS peaked at approximately 1,900 during steady state.

The write IOPS peaked at approximately 7,900 during steady state.

These data points are well within acceptable ranges.
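These pool-level peaks are consistent with the per-desktop figures quoted earlier: 2,000 desktops at four to seven IOPS each predicts roughly 8,000 to 14,000 aggregate IOPS, and the measured write share matches the approximately 80% random write profile cited for linked clone disks. A quick check of that arithmetic (ours, not from the guide):

```python
# Cross-check the measured linked clone pool peaks against the
# per-desktop steady state figures quoted earlier in this guide.

desktops = 2000
peak_read, peak_write = 1900, 7900      # measured pool IOPS at peak
total = peak_read + peak_write

# 2,000 desktops x 4-7 IOPS each -> expected aggregate range.
assert desktops * 4 <= total <= desktops * 7

write_share = peak_write / total
print(f"{write_share:.0%}")  # 81%, close to the ~80% write profile
```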


Figure 27

Figure 28 on page 50 shows the IOPS performance for the dynamic provisioning pool dedicated to the VMware View replica datastores.

The read IOPS peaked at approximately 325 during steady state.

There were zero write IOPS during steady state due to the dynamic provisioning pool being dedicated to VMware View replica disks.

These data points are well within acceptable ranges.


Figure 28

Figure 29 shows the operational latency of the dynamic provisioning pools dedicated to the linked clone and replica datastores.

Figure 29


The metrics show the following:

Latency for the linked clone dynamic provisioning pool peaked at 1.1 milliseconds.

Latency for the replica dynamic provisioning pool peaked at 1.5 milliseconds.

These data points are well within acceptable ranges.

Figure 30 shows the percentage of read and write I/O operations that were random versus sequential on the linked clone dedicated datastores.

Figure 30

These measurements were taken at the backend storage level to show the I/O profile of the workload as seen by the Hitachi Unified Storage VM controllers.

Application Experience

The VMware View Planner tool reported the time required for various application operations to complete during the test.

All Group A operations completed in less than 1 second.

These performance metrics are extremely close to physical desktop performance. This proves that this reference architecture provides adequate user experience for 2,000 knowledge users.

Figure 31 on page 52 has the application experience metrics as reported by View Planner.


Figure 31


Test Results — VMware View Accelerator and Boot Storms

VMware View Accelerator in VMware View 5.1 off-loads common read I/O to a host memory cache on the ESXi server. This reduces the I/O seen at the back-end storage subsystem, leading to an improvement in read-intensive operations, such as boot storms and anti-virus storms.

With VMware View Accelerator disabled and using vCenter to directly power on the VDI environment, it took approximately 28 minutes until the desktops were flagged as available in VMware View Administrator.

With VMware View Accelerator enabled and using vCenter to directly power on the VDI environment, it took approximately 16 minutes until the desktops were flagged as available in VMware View Administrator.

Boot times for 2,000 users were reduced by approximately 40% with VMware View Accelerator enabled.
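The approximately 40% figure follows directly from the two measured boot times. A quick check of the arithmetic (ours, not from the guide):

```python
# Boot storm duration with View Accelerator disabled vs. enabled, taken
# from the measurements above (minutes until all desktops were available
# in VMware View Administrator).
disabled, enabled = 28, 16

reduction = (disabled - enabled) / disabled
print(f"{reduction:.0%}")  # 43%, i.e. approximately 40% faster
```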

Figure 32 and Figure 33 on page 54 show the IOPS observed on the linked clone and replica dynamic provisioning pools during boot storms with VMware View Accelerator disabled.

Read IOPS on the linked clone pool peaked at approximately 27,000.

Write IOPS on the linked clone pool peaked at approximately 12,000.

Read IOPS on the replica pool peaked at approximately 58,000.

Write IOPS on the replica pool were between 1 and 2 throughout the entirety of the boot storm (a non-factor).

All 2,000 desktops were ready for logon 28 minutes after the power-on event.


Figure 32

Figure 33


This data proves that the underlying storage supports an immediate power-on of all 2,000 desktops, with them ready for use in less than thirty minutes, even with VMware View Accelerator disabled.

Figure 34 and Figure 35 on page 56 illustrate the IOPS seen on the linked clone and replica Hitachi Dynamic Provisioning pools during boot storms with VMware View Accelerator enabled.

Read IOPS on the linked clone pool peaked at approximately 26,000.

Write IOPS on the linked clone pool peaked at approximately 12,000.

Read IOPS on the replica pool peaked at approximately 42,000.

Write IOPS on the replica pool were between 1 and 2 throughout the entirety of the boot storm (a non-factor).

All 2,000 desktops were ready for logon 16 minutes after the power-on event.

Figure 34


Figure 35

This data shows that, with VMware View Accelerator enabled, the underlying storage supports an immediate power-on of all 2,000 desktops, with the desktops ready for use approximately forty percent faster.

Figure 36 shows the difference in total read I/O on the replica dynamic provisioning pool with VMware View Accelerator enabled and disabled. All read I/Os on the replica pool were aggregated over the thirty minutes of data collected in each run and compared to determine the reduction in read I/O.

The replica pool serviced approximately 220,000 fewer read I/Os during the thirty minutes analyzed with VMware View Accelerator enabled than during the thirty minutes analyzed with it disabled.
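To put the 220,000 figure in perspective, it can be converted into an average read-IOPS reduction over the thirty-minute window. This is a worked check of the number above, not additional measurement data:

```python
# Total read I/Os avoided on the replica pool over the analyzed window.
fewer_reads = 220_000
window_seconds = 30 * 60

# Average read-IOPS load removed from the replica pool by the host cache.
avg_iops_saved = fewer_reads / window_seconds
print(f"Average read IOPS avoided: {avg_iops_saved:.1f}")  # → 122.2
```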


Figure 36

This data shows that VMware View Accelerator can reduce read I/O by approximately 35 to 45 percent during periods of heavy read activity, such as boot storms and anti-virus storms.


Conclusion

This reference architecture guide shows how to design a Hitachi Unified Compute Platform Select for VMware View solution using Hitachi Compute Blade 500 and Hitachi Unified Storage VM. The cell design validated in the Hitachi Data Systems laboratory enables a build-as-you-go model with performance-proven sets of hardware resources.

Using this cell approach, you can scale designs to support from 500 up to 6,000 knowledge-based workload users. Create a right-sized design that allows purchasing flexibility to meet changing business or project needs.

When designing your implementation of this environment, understand the I/O workload of a desktop in your existing environment to properly design the virtual desktop architecture. This can reduce costs and increase ROI by allowing you to implement the smallest environment possible.

Dedicate low-cost, commodity spindles to the replica volumes that are the basis of the linked clone desktops to increase cache utilization on Hitachi Unified Storage VM. This provides a better end-user experience, particularly when desktop pools for similar workloads or users are based on the same replica virtual machine.

Enable VMware View Accelerator for desktop pools to offload commonly read blocks to host memory and reduce read I/O seen at the Hitachi Unified Storage VM. This reduces time to completion for read intensive operations in the environment such as boot storms and anti-virus storms.

Use Hitachi Dynamic Provisioning to create pools whose I/O capability can be increased or decreased dynamically, if necessary. The ability to provision additional spindles to an already-provisioned datastore within VMware vSphere allows non-disruptive upgrades to the underlying storage infrastructure. This provides immediate benefits to the environment without disruptive shuffling of virtual machines, datastores, or LDEVs.

Further use of Hitachi Dynamic Provisioning to create dedicated pools for different types of virtual desktop users could drive utilization and ROI even higher.


For More Information

Hitachi Data Systems Global Services offers experienced storage consultants, proven methodologies, and a comprehensive services portfolio to assist you in implementing Hitachi products and solutions in your environment. For more information, see the Hitachi Data Systems Global Services website.

Live and recorded product demonstrations are available for many Hitachi products. To schedule a live demonstration, contact a sales representative. To view a recorded demonstration, see the Hitachi Data Systems Corporate Resources website. Click the Product Demos tab for a list of available recorded demonstrations.

Hitachi Data Systems Academy provides best-in-class training on Hitachi products, technology, solutions and certifications. Hitachi Data Systems Academy delivers on-demand web-based training (WBT), classroom-based instructor-led training (ILT) and virtual instructor-led training (vILT) courses. For more information, see the Hitachi Data Systems Services Education website.

For more information about Hitachi products and services, contact your sales representative or channel partner or visit the Hitachi Data Systems website.


Corporate Headquarters
2845 Lafayette Street, Santa Clara, California 95050-2627 USA
www.HDS.com

Regional Contact Information
Americas: +1 408 970 1000 or [email protected]
Europe, Middle East and Africa: +44 (0) 1753 618000 or [email protected]
Asia-Pacific: +852 3189 7900 or [email protected]

© Hitachi Data Systems Corporation 2013. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. “Innovate with Information” is a trademark or registered trademark of Hitachi Data Systems Corporation. Microsoft, SQL Server, and Windows are trademarks or registered trademarks of Microsoft Corporation. All other trademarks, service marks, and company names are properties of their respective owners.

Notice: This document is for informational purposes only, and does not set forth any warranty, expressed or implied, concerning any equipment or service offered or to be offered by Hitachi Data Systems Corporation.

AS-189-01, January 2013