

Cisco Desktop Virtualization Solution for EMC VSPEX with Citrix XenDesktop 7.5 for 1000 Seats

July 2014

Building Architectures to Solve Business Problems


About Cisco Validated Design (CVD) Program

The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable, and more predictable customer deployments. For more information visit www.cisco.com/go/designzone.

ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS (COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.

THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.

CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries.

All other trademarks mentioned in this document or website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)

© 2014 Cisco Systems, Inc. All rights reserved.


Contents About the Authors ......................................................................................................................................................... 9

Acknowledgements ................................................................................................................................................... 9

Overview ..................................................................................................................................................................... 10

Solution Component Benefits ................................................................................................................................. 10

Benefits of Cisco Unified Computing System .................................................................................................... 10

Benefits of Cisco Nexus Physical Switching ...................................................................................................... 11

Cisco Nexus 5548UP Unified Port Layer 2 Switches ......................................................................................... 11

Cisco Nexus 1000V Distributed Virtual Switch ................................................................................................. 11

Cisco Virtual Machine Fabric Extender (VM-FEX) ........................................................................................... 12

Benefits of EMC VSPEX Proven Infrastructure ................................................................................................. 12

EMC VSPEX End User Computing with the Next-Generation VNX Series .......................................................... 13

Flash-Optimized Hybrid Array ........................................................................................................................... 13

VNX Intel MCx Code Path Optimization ........................................................................................................... 13

Benefits of VMware vSphere ESXi 5.5 .............................................................................................................. 15

Benefits of Citrix XenDesktop 7.5 ...................................................................................................................... 15

Audience ................................................................................................................................................................. 16

Summary of Main Findings ......................................................................................................................................... 16

Architecture ................................................................................................................................................................. 17

Hardware Deployed................................................................................................................................................. 17

Logical Architecture ........................................................................................................................................... 20

Software Revisions ............................................................................................................................................. 21

Configuration Guidelines .................................................................................................................................... 21

VMware Clusters ................................................................................................................................................ 22

Infrastructure Components .......................................................................................................................................... 23

Cisco Unified Computing System (UCS) ................................................................................................................ 23

Cisco Unified Computing System Components ................................................................................................. 23

Citrix XenDesktop 7.5 ............................................................................................................................................. 27

Enhancements in XenDesktop 7.5 ...................................................................................................................... 27

High-Definition User Experience (HDX) Technology ....................................................................................... 29

Citrix XenDesktop 7.5 Desktop and Application Services ................................................................................. 29

Citrix Provisioning Services 7.1 ......................................................................................................................... 30

EMC VNX Series .................................................................................................................................................... 31

EMC VNX5400 Used in Testing ........................................................................................................................ 31

Modular Virtual Desktop Infrastructure Technical Overview ................................................................................. 33

Modular Architecture .......................................................................................................................................... 33

Cisco Data Center Infrastructure for Desktop Virtualization .................................................................................. 34


Simplified ........................................................................................................................................................... 35

Secure ................................................................................................................................................................. 35

Scalable ............................................................................................................................................................... 35

Savings and Success ........................................................................................................................................... 35

Cisco Services ..................................................................................................................................................... 36

Cisco Networking Infrastructure ......................................................................................................................... 36

Features and Benefits .......................................................................................................................................... 37

Architecture and Design of XenDesktop 7.5 on Cisco Unified Computing System and EMC VNX Storage Design Fundamentals .......................................................................................... 37

Understanding Applications and Data ................................................................................................................ 38

Project Planning and Solution Sizing Sample Questions .................................................................................... 38

Hypervisor Selection ........................................................................................................................................... 39

Desktop Virtualization Design Fundamentals ......................................................................................................... 39

Citrix Design Fundamentals ............................................................................................................................... 39

Citrix Provisioning Services ............................................................................................................................... 40

Example XenDesktop Deployments ................................................................................................................... 42

Designing a XenDesktop Environment for a Mixed Workload .......................................................................... 44

Citrix Unified Design Fundamentals .................................................................................................................. 45

Storage Architecture Design ................................................................................................................................... 46

Solution Validation ...................................................................................................................................................... 46

Configuration Topology for a Scalable XenDesktop 7.5 Mixed Workload Desktop Virtualization Solution ........ 47

Cisco Unified Computing System Configuration .................................................................................................... 48

Base Cisco UCS System Configuration .............................................................................................................. 48

QoS and CoS in Cisco Unified Computing System ............................................................................................ 80

System Class Configuration ................................................................................................................................ 80

Cisco UCS System Class Configuration ............................................................................................................. 81

Steps to Enable QoS on the Cisco Unified Computing System .................................................. 81

LAN Configuration ................................................................................................................................................. 82

Cisco UCS and EMC VNX Ethernet Connectivity ............................................................................................. 82

Cisco Nexus 1000V Configuration in L3 Mode ................................................................................................. 83

Configuring Cisco UCS VM-FEX ...................................................................................................................... 98

SAN Configuration ............................................................................................................................................... 109

Boot from SAN Benefits ................................................................................................................................... 109

Configuring Boot from SAN Overview ............................................................................................................ 110

SAN Configuration on Cisco Nexus 5548UP ................................................................................................... 110

Configuring Boot from iSCSI SAN on EMC VNX5400 .................................................................................. 113

iSCSI SAN Configuration on Cisco UCS Manager .......................................................................................... 117

EMC VNX5400 Storage Configuration ................................................................................................................ 117


Example EMC Volume Configuration for PVS Write Cache ........................................................................... 118

EMC Storage Configuration for PVS vDisks ................................................................................................... 119

EMC Storage Configuration for VMware ESXi 5.5 Infrastructure and VDA Clusters ................................... 119

Example EMC Boot LUN Configuration ......................................................................................................... 119

EMC FAST Cache in Practice .......................................................................................................................... 120

EMC Additional Configuration Information .................................................................................................... 122

NFS active threads per Data Mover .................................................................................................................. 122

Installing and Configuring ESXi 5.5 ..................................................................................................................... 123

Log in to Cisco UCS 6200 Fabric Interconnect ................................................................................................ 123

Set Up VMware ESXi Installation .................................................................................................................... 124

Install ESXi ....................................................................................................................................................... 124

Set Up Management Networking for ESXi Hosts ............................................................................................ 124

Download VMware vSphere Client and vSphere Remote CLI ........................................................................ 125

Log in to VMware ESXi Hosts by Using VMware vSphere Client .................................................................. 125

Download Updated Cisco VIC enic and fnic Drivers ....................................................................................... 125

Download EMC PowerPath/VE Driver and VAAI plug-in for File ................................................................. 126

Load Updated Cisco VIC enic and fnic Drivers and EMC Bundles ................................................................. 126

Set Up VMkernel Ports and Virtual Switch ...................................................................................................... 127

Mount Required Datastores .............................................................................................................................. 131

Configure NTP on ESXi Hosts ......................................................................................................................... 132

Move VM Swap File Location ......................................................................................................................... 132

Install and Configure vCenter and Clusters ........................................................................................................... 132

Build Microsoft SQL Server VM...................................................................................................................... 133

Install Microsoft SQL Server 2012 for vCenter ................................................................................................ 134

Build and Set Up VMware vCenter VM ........................................................................................................... 140

Install VMware vCenter Server ........................................................................................................................ 143

Set Up ESXi 5.5 Cluster Configuration ............................................................................................................ 147

Installing and Configuring Citrix Licensing and Provisioning Components ......................................................... 149

Installing Citrix License Server ........................................................................................................................ 149

Installing Provisioning Services ....................................................................................................................... 153

Installation of Additional PVS Servers ............................................................................................................. 171

Installing and Configuring XenDesktop 7.5 Components ..................................................................................... 177

Installing the XenDesktop Delivery Controllers ............................................................................................... 177

XenDesktop Controller Configuration .............................................................................................................. 182

Additional XenDesktop Controller Configuration ............................................................................................ 187

Adding Host Connections and Resources with Citrix Studio ........................................................................... 190

Installing and Configuring StoreFront .............................................................................................................. 192

Desktop Delivery Golden Image Creation and Resource Provisioning ..................................................................... 201


Overview of Desktop Delivery .............................................................................................................................. 201

Overview of PVS vDisk Image Management ................................................................................................... 202

Overview – Golden Image Creation ................................................................................................................. 202

Write-cache drive sizing and placement ........................................................................................................... 203

Preparing the Master Targets ................................................................................................................................ 203

Installing the PVS Target Device Software ...................................................................................................... 204

Installing XenDesktop Virtual Desktop Agents ................................................................................................ 208

Installing Applications on the Master Targets .................................................................................................. 212

Creating vDisks ..................................................................................................................................................... 214

Creating Desktops with the PVS XenDesktop Setup Wizard ............................................................................... 219

Creating Delivery Groups ..................................................................................................................................... 227

Citrix XenDesktop Policies and Profile Management ........................................................................................... 230

Configuring Citrix XenDesktop Policies .......................................................................................................... 230

Configuring User Profile Management ............................................................................................................. 231

Test Setup and Configurations ................................................................................................................................... 232

Cisco UCS Test Configuration for Single Blade Scalability ................................................................................. 233

Cisco UCS Configuration for Two Chassis – Eight Mixed Workload Blade Test 1000 Users ............................. 235

Testing Methodology and Success Criteria ........................................................................................................... 236

Load Generation ............................................................................................................................................... 236

User Workload Simulation – Login VSI from Login VSI, Inc. ........................................................ 236

Testing Procedure ............................................................................................................................................. 238

Success Criteria ................................................................................................................................................. 239

Citrix XenDesktop 7.5 Hosted Virtual Desktop and RDS Hosted Shared Desktop Mixed Workload on Cisco UCS B200 M3 Blades, EMC VNX5400 Storage and VMware ESXi 5.5 Test Results ..................................................... 243

Single-Server Recommended Maximum Workload .............................................................................................. 244

XenDesktop 7.5 Hosted Virtual Desktop Single Server Maximum Recommended Workload ........................ 244

XenDesktop 7.5 RDS Hosted Shared Desktop Single Server Maximum Recommended Workload ................ 247

Full Scale Mixed Workload XenDesktop 7.5 Hosted Virtual and RDS Hosted Shared Desktops ........................ 249

Key EMC VNX5400 Performance Metrics During Scale Testing ................................................................... 253

Performance Result ........................................................................................................................................... 254

Citrix PVS Workload Characteristics .................................................................................................................... 254

Key Infrastructure Server Performance Metrics During Scale Testing ................................................................. 255

Scalability Considerations and Guidelines ................................................................................................................ 268

Cisco UCS System Scalability .............................................................................................................................. 268

Scalability of Citrix XenDesktop 7.5 Configuration ............................................................................................. 268

EMC VNX5400 Storage Guidelines for Mixed Desktop Virtualization Workload ............................... 269

References ................................................................................................................................................................. 269

Cisco Reference Documents ................................................................................................................................. 269


Citrix Reference Documents ................................................................................................................................. 269

EMC References ................................................................................................................................................... 270

VMware References .............................................................................................................................................. 270

Login VSI .............................................................................................................................................................. 270

Appendix A–Cisco Nexus 5548UP Configurations .................................................................................................. 271

Appendix B–Cisco Nexus 1000V VSM Configuration ............................................................................................. 281

Appendix C–Server Performance Charts for Mixed Workload Scale Test Run ........................................................ 284


About the Authors

Frank Anderson, Senior Solutions Architect, Cisco Systems, Inc.

Frank is a Senior Solutions Architect focusing on building and testing desktop virtualization solutions with partners. He has been involved in creating VDI-based Cisco Validated Designs since 2010 and has over 17 years of experience with Citrix and Microsoft products, holding roles as an Administrator, Consultant, Sales Engineer, Testing Engineer, TME, and Solutions Architect.

Mike Brennan, Cisco Unified Computing System Architect, Cisco Systems, Inc.

Mike is a Cisco Unified Computing System architect focusing on Virtual Desktop Infrastructure solutions, with extensive experience with EMC VNX, VMware ESX/ESXi, XenDesktop, and Provisioning Services. He has expert product knowledge in application and desktop virtualization across all three major hypervisor platforms and both major desktop brokers, as well as Microsoft Windows Active Directory, User Profile Management, DNS, DHCP, and Cisco networking technologies.

Ka-Kit Wong, Solutions Engineer, Strategic Solutions Engineering, EMC

Ka-Kit Wong is a solutions engineer for desktop virtualization in EMC's Strategic Solutions Engineering group, where he focuses on developing End User Computing (EUC) validated solutions. He has been at EMC for more than 13 years, in roles that have included systems, performance, and solutions testing. He holds a Master of Science degree in computer science from Vanderbilt University.

Acknowledgements

Hardik Patel, Technical Marketing Engineer, Cisco Systems, Inc.


Cisco Desktop Virtualization Solution for EMC VSPEX with Citrix XenDesktop 7.5 for 1000 Seats

Overview

This document provides a reference architecture for a 1000-seat Virtual Desktop Infrastructure using Citrix XenDesktop 7.5 built on Cisco UCS B200 M3 blades with an EMC VNX5400 array and the VMware vSphere ESXi 5.5 hypervisor platform.

The landscape of desktop virtualization is changing constantly. New, high-performance Cisco UCS Blade Servers and Cisco UCS unified fabric, combined as part of the EMC VSPEX Proven Infrastructure with the latest-generation EMC VNX arrays, result in a more compact, more powerful, more reliable, and more efficient platform.

In addition, the advances in the Citrix XenDesktop 7.5 system, which now incorporates traditional hosted virtual Windows 7 or Windows 8 desktops, hosted applications, and hosted shared Server 2008 R2 or Server 2012 R2 server desktops (formerly delivered by Citrix XenApp), provide unparalleled scale and management simplicity while extending the Citrix HDX FlexCast models to additional mobile devices.

This document provides the architecture and design of a virtual desktop infrastructure for 1000 mixed use-case users. The infrastructure is 100% virtualized on VMware ESXi 5.5, with third-generation Cisco UCS B-Series B200 M3 blade servers booting via iSCSI from an EMC VNX5400 storage array. The virtual desktops are powered by Citrix Provisioning Server 7.1 and Citrix XenDesktop 7.5, with a mix of hosted shared desktops (70%) and pooled hosted virtual Windows 7 desktops (30%) to support the user population. Where applicable, the document provides best practice recommendations and sizing guidelines for customer deployments of XenDesktop 7.5 on the Cisco Unified Computing System.

Solution Component Benefits

Each of the components of the overall solution materially contributes to the value of the functional design contained in

this document.

Benefits of Cisco Unified Computing System

Cisco Unified Computing System™ is the first converged data center platform that combines industry-standard,

x86-architecture servers with networking and storage access into a single converged system. The system is entirely

programmable using unified, model-based management to simplify and speed deployment of enterprise-class

applications and services running in bare-metal, virtualized, and cloud computing environments.

Benefits of the Cisco Unified Computing System include:

Architectural flexibility

Cisco B-Series blade servers for infrastructure and virtual workload hosting

Cisco C-Series rack-mount servers for infrastructure and virtual workload hosting

Cisco 6200 Series second generation fabric interconnects provide unified blade, network and storage

connectivity

Cisco 5108 Blade Chassis provide the perfect environment for multi-server type, multi-purpose workloads

in a single containment

Infrastructure Simplicity

Converged, simplified architecture drives increased IT productivity


Cisco UCS management results in flexible, agile, high performance, self-integrating information

technology with faster ROI

Fabric Extender technology reduces the number of system components to purchase, configure and maintain

Standards-based, high bandwidth, low latency virtualization-aware unified fabric delivers high density,

excellent virtual desktop user-experience

Business Agility

Model-based management means faster deployment of new capacity for rapid and accurate scalability

Scale up to 20 Chassis and up to 160 blades in a single Cisco UCS management domain

Scale to multiple Cisco UCS Domains with Cisco UCS Central within and across data centers globally

Leverage Cisco UCS Management Packs for VMware vCenter 5.1 for integrated management

Benefits of Cisco Nexus Physical Switching

The Cisco Nexus product family includes lines of physical unified port Layer 2 10 Gigabit Ethernet switches, fabric extenders, and virtual distributed switching technologies. In our study, we utilized Cisco Nexus 5548UP physical switches, Cisco Nexus 1000V distributed virtual switches, and Cisco VM-FEX technology to deliver an excellent end user experience.

Cisco Nexus 5548UP Unified Port Layer 2 Switches

The Cisco Nexus 5548UP Switch delivers innovative architectural flexibility, infrastructure simplicity, and business

agility, with support for networking standards. For traditional, virtualized, unified, and high-performance computing

(HPC) environments, it offers a long list of IT and business advantages, including:

Architectural Flexibility

Unified ports that support traditional Ethernet, Fibre Channel (FC), and Fibre Channel over Ethernet

(FCoE)

Synchronizes system clocks with accuracy of less than one microsecond, based on IEEE 1588

Offers converged Fabric extensibility, based on emerging standard IEEE 802.1BR, with Fabric Extender

(FEX) Technology portfolio, including the Nexus 1000V Virtual Distributed Switch

Infrastructure Simplicity

Common high-density, high-performance, data-center-class, fixed-form-factor platform

Consolidates LAN and storage

Supports any transport over an Ethernet-based fabric, including Layer 2 and Layer 3 traffic

Supports storage traffic, including iSCSI, NAS, FC, RoE, and IBoE

Reduces management points with FEX Technology

Business Agility

Meets diverse data center deployments on one platform

Provides rapid migration and transition for traditional and evolving technologies

Offers performance and scalability to meet growing business needs

Specifications At-a-Glance

A 1-rack-unit, 1/10 Gigabit Ethernet switch

32 fixed Unified Ports on the base chassis and one expansion slot, totaling 48 ports

The slot can support any of three modules: Unified Ports, 1/2/4/8 native Fibre Channel, and Ethernet or FCoE

Throughput of up to 960 Gbps.
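The 960 Gbps figure quoted above follows directly from the port count and line rate; the sketch below checks that arithmetic, assuming the conventional datasheet practice of counting both directions of a full-duplex link.

```python
# Sanity check of the Nexus 5548UP line-rate figure quoted above.
# Assumption: "throughput" counts both directions (full duplex),
# as is conventional in switch datasheets.
FIXED_PORTS = 32
EXPANSION_PORTS = 16          # one expansion slot, 16 ports
PORT_SPEED_GBPS = 10

total_ports = FIXED_PORTS + EXPANSION_PORTS
throughput_gbps = total_ports * PORT_SPEED_GBPS * 2  # duplex

print(total_ports)        # 48
print(throughput_gbps)    # 960
```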

Cisco Nexus 1000V Distributed Virtual Switch

Get highly secure, multitenant services by adding virtualization intelligence to your data center network with the

Cisco Nexus 1000V Switch for VMware vSphere. This switch:


Extends the network edge to the hypervisor and virtual machines

Is built to scale for cloud networks

Forms the foundation of virtual network overlays for the Cisco Open Network Environment and Software

Defined Networking (SDN)

Important differentiators for the Cisco Nexus 1000V for VMware vSphere include:

Extensive virtual network services built on Cisco advanced service insertion and routing technology

Support for vCloud Director and vSphere hypervisor

Feature and management consistency for easy integration with the physical infrastructure

Exceptional policy and control features for comprehensive networking functionality

Policy management and control by the networking team instead of the server virtualization team

(separation of duties)

Use Virtual Networking Services

The Cisco Nexus 1000V Switch optimizes the use of Layer 4 - 7 virtual networking services in virtual machine and

cloud environments through Cisco vPath architecture services.

Cisco vPath 2.0 supports service chaining so you can use multiple virtual network services as part of a single traffic

flow. For example, you can simply specify the network policy, and vPath 2.0 can direct traffic:

First, through the Cisco ASA1000V Cloud Firewall for tenant edge security

Then, through the Cisco Virtual Security Gateway for Nexus 1000V Switch for a zoning firewall

In addition, Cisco vPath works on VXLAN to support movement between servers in different Layer 2 domains.

Together, these features promote highly secure policy, application, and service delivery in the cloud.

Cisco Virtual Machine Fabric Extender (VM-FEX)

Cisco Virtual Machine Fabric Extender (VM-FEX) collapses virtual and physical networking into a single

infrastructure. Data center administrators can now provision, configure, manage, monitor, and diagnose virtual

machine network traffic and bare metal network traffic within a unified infrastructure.

The VM-FEX software extends Cisco fabric extender technology to the virtual machine with the following

capabilities:

Each virtual machine includes a dedicated interface on the parent switch

All virtual machine traffic is sent directly to the dedicated interface on the switch

The software-based switch in the hypervisor is eliminated

Benefits of EMC VSPEX Proven Infrastructure

EMC VSPEX Proven Infrastructure accelerates the deployment of private cloud. Built with best-of-breed virtualization, server, network, storage, and backup technologies, VSPEX enables faster deployment, more simplicity, greater choice, higher efficiency, and lower risk. Validation by EMC ensures predictable performance and enables customers to select products that leverage their existing IT infrastructure while eliminating planning, sizing, and configuration burdens. VSPEX provides a virtual infrastructure for customers looking to gain the simplicity that is characteristic of truly converged infrastructures while at the same time gaining more choice in individual stack components.

As part of the EMC VSPEX End User Computing solution, the EMC VNX flash-optimized unified storage platform

delivers innovation and enterprise capabilities for file, block, and object storage in a single, scalable, and easy-to-use

solution. Ideal for mixed workloads in physical or virtual environments, VNX combines powerful and flexible

hardware with advanced efficiency, management, and protection software to meet the demanding needs of today’s

virtualized application environments.

VNX storage includes the following components:

Host adapter ports (for block)—Provide host connectivity through fabric into the array.

Data Movers (for file)—Front-end appliances that provide file services to hosts (optional if providing

CIFS/SMB or NFS services).


Storage processors (SPs)—The compute component of the storage array. SPs handle all aspects of data

moving into, out of, and between arrays.

Disk drives—Disk spindles and solid state drives (SSDs) that contain the host/application data and their

enclosures.

Note: The term Data Mover refers to a VNX hardware component, which has a CPU, memory, and input/output (I/O) ports. It enables the CIFS (SMB) and NFS protocols on the VNX array.

EMC VSPEX End User Computing with the Next-Generation VNX Series

Next-generation VNX includes many features and enhancements designed and built upon the first generation’s

success. These features and enhancements include:

More capacity with multicore optimization with multicore cache, multicore RAID, and multicore FAST

Cache (MCx™)

Greater efficiency with a flash-optimized hybrid array

Better protection by increasing application availability with active/active

Easier administration and deployment with the new Unisphere® Management Suite

VSPEX is built with next-generation VNX to deliver even greater efficiency, performance, and scale than

ever before.

Flash-Optimized Hybrid Array

VNX is a flash-optimized hybrid array that provides automated tiering to deliver the best performance to your

critical data, while intelligently moving less frequently accessed data to lower-cost disks.

In this hybrid approach, a small percentage of flash drives in the overall system can provide a high percentage of the

overall IOPS. Flash-optimized VNX takes full advantage of the low latency of flash to deliver cost-saving

optimization and high performance scalability. EMC Fully Automated Storage Tiering Suite (FAST Cache and

FAST VP) tiers both block and file data across heterogeneous drives and boosts the most active data to the flash

drives, ensuring that customers never have to make concessions for cost or performance.

Data generally is accessed most frequently at the time it is created; therefore, new data is first stored on flash drives

to provide the best performance. As the data ages and becomes less active over time, FAST VP tiers the data from

high-performance to high-capacity drives automatically, based on customer-defined policies. This functionality has

been enhanced with four times better granularity and with new FAST VP solid-state disks (SSDs) based on

enterprise multilevel cell (eMLC) technology to lower the cost per gigabyte.

FAST Cache uses flash drives as an expanded cache layer for the array to dynamically absorb unpredicted spikes in

system workloads. Frequently accessed data is copied to the FAST Cache in 64 KB increments. Subsequent reads

and/or writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data

to flash drives. This dramatically improves the response times for the active data and reduces data hot spots that can

occur within the LUN.
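The FAST Cache behavior described above can be sketched as a small simulation: I/O is tracked in 64 KB chunks, and a chunk that is touched repeatedly gets copied to the flash cache layer. The promotion threshold of three accesses used here is an illustrative assumption, not a documented VNX parameter.

```python
# Minimal sketch of FAST Cache promotion: back-end data is tracked in
# 64 KB chunks, and hot chunks are copied to the flash cache layer.
# PROMOTE_AFTER = 3 is an illustrative assumption, not a VNX setting.
CHUNK_SIZE = 64 * 1024
PROMOTE_AFTER = 3

access_counts = {}
fast_cache = set()

def access(byte_offset):
    """Record an I/O and promote its 64 KB chunk once it turns hot."""
    chunk = byte_offset // CHUNK_SIZE
    if chunk in fast_cache:
        return "cache hit"          # served from flash
    access_counts[chunk] = access_counts.get(chunk, 0) + 1
    if access_counts[chunk] >= PROMOTE_AFTER:
        fast_cache.add(chunk)       # subsequent reads/writes hit flash
    return "backend"

for _ in range(3):
    access(130000)                  # same chunk three times -> promoted
print(access(130000))               # cache hit
```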

All VSPEX use cases benefit from the increased efficiency provided by the FAST Suite. Furthermore, VNX

provides out-of-band, block-based deduplication that can dramatically lower the costs of the flash tier.

VNX Intel MCx Code Path Optimization

The advent of flash technology has been a catalyst in making significant changes in the requirements of midrange

storage systems. EMC redesigned the midrange storage platform to efficiently optimize multicore CPUs to provide

the highest performing storage system at the lowest cost in the market.

MCx distributes all VNX data services across all cores (up to 32), as shown in Figure 1. The VNX series with MCx has dramatically improved file performance for transactional applications, such as databases or virtual machines over network-attached storage (NAS).


Figure 1: Next-generation VNX with multicore optimization

Multicore Cache

The cache is the most valuable asset in the storage subsystem; its efficient use is the key to the overall efficiency of

the platform in handling variable and changing workloads. The cache engine has been modularized to take

advantage of all the cores available in the system.

Multicore RAID

Another important improvement to the MCx design is how it handles I/O to the permanent back-end storage—hard

disk drives (HDDs) and SSDs. The modularization of the back-end data management processing, which enables

MCx to seamlessly scale across all processors, greatly increases the performance of the VNX system.

Performance Enhancements

VNX storage, enabled with the MCx architecture, is optimized for FLASH 1st and provides unprecedented overall

performance; it optimizes transaction performance (cost per IOPS), bandwidth performance (cost per GB/s) with

low latency, and capacity efficiency (cost per GB).

VNX provides the following performance improvements:

Up to four times more file transactions when compared with dual controller arrays

Increased file performance for transactional applications (for example, Microsoft Exchange on VMware

over NFS) by up to three times, with a 60 percent better response time

Up to four times more Oracle and Microsoft SQL Server OLTP transactions

Up to six times more virtual machines

Active/Active Array Storage Processors

The new VNX architecture provides active/active array storage processors, as shown in Figure 2, eliminating application timeouts during path failover because both paths are actively serving I/O.


Figure 2: Active/active processors increase performance, resiliency, and efficiency

Load balancing is also improved, providing up to double the performance for applications. Active/active for block is

ideal for applications that require the highest levels of availability and performance, but do not require tiering or

efficiency services like compression, deduplication, or snapshot.

Note: The active/active processors are available only for RAID LUNs, not for pool LUNs.

Benefits of VMware vSphere ESXi 5.5

VMware vSphere® 5.5 is the latest release of the flagship virtualization platform from VMware. VMware vSphere,

known in many circles as "ESXi", for the name of the underlying hypervisor architecture, is a bare-metal hypervisor

that installs directly on top of your physical server and partitions it into multiple virtual machines. Each virtual

machine shares the same physical resources as the other virtual machines and they can all run at the same time.

Unlike other hypervisors, all management functionality of vSphere is possible through remote management tools.

There is no underlying operating system, reducing the install footprint to less than 150 MB.

Here are some key features included with vSphere 5.5:

Improved Security

Extensive Logging and Auditing

Enhanced vMotion

New Virtual Hardware

Active Directory Integration

Centralized Management

Stateless Firewall

Centralized Management of Host Image and Configuration via Auto Deploy

For more information on the vSphere ESXi hypervisor, go to:

http://www.vmware.com/products/esxi-and-esx/overview.html

Benefits of Citrix XenDesktop 7.5

Enterprise IT organizations are tasked with the challenge of provisioning Microsoft Windows apps and desktops

while managing cost, centralizing control, and enforcing corporate security policy. Deploying Windows apps to

users in any location, regardless of the device type and available network bandwidth, enables a mobile workforce

that can improve productivity. With Citrix XenDesktop™ 7.5, IT can effectively control app and desktop

provisioning while securing data assets and lowering capital and operating expenses.

The XenDesktop™ 7.5 release offers these benefits:

Comprehensive virtual desktop delivery for any use case. The XenDesktop 7.5 release incorporates the

full power of XenApp, delivering full desktops or just applications to users. Administrators can deploy both


XenApp published applications and desktops (to maximize IT control at low cost) or personalized VDI

desktops (with simplified image management) from the same management console. Citrix XenDesktop 7.5

leverages common policies and cohesive tools to govern both infrastructure resources and user access.

Simplified support and choice of BYO (Bring Your Own) devices. XenDesktop 7.5 brings thousands of

corporate Microsoft Windows-based applications to mobile devices with a native-touch experience and

optimized performance. HDX technologies create a “high definition” user experience, even for graphics-

intensive design and engineering applications.

Lower cost and complexity of application and desktop management. XenDesktop 7.5 helps IT

organizations take advantage of agile and cost-effective cloud offerings, allowing the virtualized

infrastructure to flex and meet seasonal demands or the need for sudden capacity changes. IT organizations

can deploy XenDesktop application and desktop workloads to private or public clouds, including Amazon

AWS, Citrix Cloud Platform, and (in the near future) Microsoft Azure.

Protection of sensitive information through centralization. XenDesktop decreases the risk of corporate

data loss, enabling access while securing intellectual property and centralizing applications since assets

reside in the datacenter.

Audience

This document describes the architecture and deployment procedures of an infrastructure composed of Cisco, EMC,

and VMware hypervisor and Citrix desktop virtualization products. The intended audience of this document

includes, but is not limited to, sales engineers, field consultants, professional services, IT managers, partner

engineering, and customers who want to deploy the solution described in this document.

Summary of Main Findings

The combination of technologies from Cisco Systems, Inc., Citrix Systems, Inc., EMC, and VMware Inc. produced

a highly efficient, robust and affordable desktop virtualization solution for a hosted virtual desktop and hosted

shared desktop mixed deployment supporting different use cases. Key components of the solution included:

This solution is Cisco’s Desktop Virtualization Converged Design with VSPEX providing our customers

with a turnkey physical and virtual infrastructure specifically designed to support 1000 desktop users in a

highly available proven design. This architecture is well suited for large departmental and enterprise

deployments of virtual desktop infrastructure.

More power, same size. Cisco UCS B200 M3 half-width blade with dual 10-core 2.8 GHz Intel Xeon Ivy

Bridge (E5-2680v2) processors and 384GB of memory for XenDesktop hosted virtual desktop hosts and

256GB of memory for XenDesktop hosted shared desktop hosts supports ~25% more virtual desktop

workloads than the previously released Sandy Bridge processors on the same hardware. The Intel Xeon E5-

2680 v2 10-core processors used in this study provided a balance between increased per-blade capacity and

cost.

Fault-tolerance with high availability built into the design. The 1000-user design is based on using two

Unified Computing System chassis with eight B200 M3 blades for virtualized desktop workloads and two

B200 M3 blades for virtualized infrastructure workloads. The design provides N+1 Server fault tolerance

for hosted virtual desktops, hosted shared desktops and infrastructure services.
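The N+1 sizing above can be sanity-checked with simple arithmetic: with one blade lost, the surviving hosts must still carry the full user load. The per-blade densities below are derived from the stated blade counts and 70/30 user mix, not from a published Cisco sizing table.

```python
# Back-of-the-envelope check of the N+1 fault-tolerant design:
# the workload must fit on (blades - 1) hosts after a blade failure.
import math

TOTAL_USERS = 1000
rds_users = int(TOTAL_USERS * 0.70)   # hosted shared desktops (70%)
hvd_users = TOTAL_USERS - rds_users   # pooled hosted virtual desktops (30%)

def users_per_blade_after_failure(users, blades):
    """Per-blade density required when one of the blades is down."""
    return math.ceil(users / (blades - 1))

print(rds_users, hvd_users)                   # 700 300
print(users_per_blade_after_failure(700, 5))  # 175 RDS users per blade
print(users_per_blade_after_failure(300, 3))  # 150 HVD users per blade
```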

Stress-tested to the limits during aggressive boot scenario. The 1000-user mixed hosted virtual desktop and

hosted shared desktop environment booted and registered with the XenDesktop 7.5 Delivery Controllers in

under 15 minutes, providing our customers with an extremely fast, reliable cold-start desktop virtualization

system.

Stress-tested to the limits during simulated login storms. All 1000 simulated users logged in and started

running workloads up to steady state in 30-minutes without overwhelming the processors, exhausting

memory or exhausting the storage subsystems, providing customers with a desktop virtualization system

that can easily handle the most demanding login and startup storms.

Ultra-condensed computing for the datacenter. The rack space required to support the 1000-user system is

less than a single rack, 34 rack units, conserving valuable data center floor space.


Pure Virtualization: This CVD presents a validated design that is 100% virtualized on VMware ESXi 5.5.

All of the virtual desktops, user data, profiles, and supporting infrastructure components, including Active

Directory, Provisioning Servers, SQL Servers, XenDesktop Delivery Controllers, and XenDesktop RDS

(XenApp) servers were hosted as virtual machines. This provides customers with complete flexibility for

maintenance and capacity additions because the entire system runs on the VSPEX converged infrastructure

with stateless Cisco UCS Blade servers, and EMC unified storage.

Cisco maintains industry leadership with the new Cisco UCS Manager 2.2(1d) software that simplifies scaling, guarantees consistency, and eases maintenance. Cisco’s ongoing development efforts with Cisco UCS Manager, Cisco UCS Central, and Cisco UCS Director ensure that customer environments are consistent locally, across UCS Domains, and across the globe. Our software suite offers increasingly simplified operational and deployment management, and it continues to widen the span of control for customer organizations’ subject matter experts in compute, storage, and network.

Our 10G unified fabric story gets additional validation on second generation 6200 Series Fabric

Interconnects as Cisco runs more challenging workload testing, while maintaining unsurpassed user

response times.

EMC VNX and the FAST suite provide industry-leading storage solutions that efficiently handle the most

demanding IO bursts (e.g. login storms), profile management, and user data management, provide VM

backup and restores, deliver simple and flexible business continuance, and help reduce storage cost per

desktop.

EMC VNX provides comprehensive storage architecture for hosting all user data components (VMs,

profiles, user data, vDisks and PXE boot images) on the same storage array.

The EMC VNX system enables administrators to seamlessly add, upgrade, or remove storage infrastructure to meet the needs of the virtual desktops.

The EMC Virtual Storage Integrator (VSI) plug-in for VMware has deep integration with VMware vSphere and provides easy-button automation for key storage tasks, such as datastore provisioning, storage resizing, and data deduplication, directly from within vCenter Server.

EMC PowerPath/VE combines multipath I/O capabilities, automatic load balancing, and path failover functions into one integrated package.

Latest and greatest virtual desktop and application product. Citrix XenDesktop™ 7.5 follows a new unified

product architecture that supports both hosted-shared desktops and applications (RDS) and complete virtual

desktops (VDI). This new XenDesktop release simplifies tasks associated with large-scale VDI

management. This modular solution supports seamless delivery of Windows apps and desktops as the

number of users increase. In addition, HDX enhancements help to optimize performance and improve the

user experience across a variety of endpoint device types, from workstations to mobile devices including

laptops, tablets, and smartphones.

Optimized to achieve the best possible performance and scale. For hosted shared desktop sessions, the best

performance was achieved when the number of vCPUs assigned to the XenDesktop 7.5 RDS virtual

machines did not exceed the number of hyper-threaded cores available on the server. In other words,

maximum performance is obtained when not overcommitting the CPU resources for the virtual machines

running RDS.
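The RDS sizing rule above reduces to a simple check: keep the sum of vCPUs assigned to the RDS virtual machines at or below the host's hyper-threaded core count. The VM count and vCPU split in the example are illustrative assumptions, not the validated configuration.

```python
# Sketch of the RDS vCPU sizing rule: total assigned vCPUs should not
# exceed the host's hyper-threaded (logical) core count.
SOCKETS = 2
CORES_PER_SOCKET = 10          # Intel Xeon E5-2680 v2
THREADS_PER_CORE = 2           # Hyper-Threading enabled

logical_processors = SOCKETS * CORES_PER_SOCKET * THREADS_PER_CORE

def within_sizing_rule(vm_count, vcpus_per_vm):
    """True if the RDS VMs do not overcommit the logical processors."""
    return vm_count * vcpus_per_vm <= logical_processors

print(logical_processors)             # 40
print(within_sizing_rule(8, 5))       # True  (40 vCPUs total)
print(within_sizing_rule(8, 6))       # False (48 vCPUs oversubscribes)
```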

Provisioning desktop machines made easy. Citrix Provisioning Services 7.1 created hosted virtual desktops

as well as hosted shared desktops for this solution using a single method for both: the “PVS XenDesktop Setup Wizard”.

Architecture

Hardware Deployed

The architecture deployed is highly modular. While each customer’s environment might vary in its exact

configuration, once the reference architecture contained in this document is built, it can easily be scaled as

requirements and demands change. This includes scaling both up (adding additional resources within a Cisco UCS

Domain) and out (adding additional Cisco UCS Domains and EMC VNX Storage arrays).


The 1000-user XenDesktop 7.5 solution includes Cisco networking, Cisco UCS and EMC VNX storage, which fits

into a single data center rack, including the access layer network switches.

This validated design document details the deployment of the 1000-user configuration for a mixed XenDesktop

workload featuring the following software:

Citrix XenDesktop 7.5 Pooled Hosted Virtual Desktops with PVS write cache on NFS storage

Citrix XenDesktop 7.5 Shared Hosted Virtual Desktops with PVS write cache on NFS storage

Citrix Provisioning Server 7.1

Citrix User Profile Manager

Citrix StoreFront 2.1

Cisco Nexus 1000V Distributed Virtual Switch

Cisco Virtual Machine Fabric Extender (VM-FEX)

VMware vSphere ESXi 5.5 Hypervisor

Microsoft Windows Server 2012 R2 and Windows 7 32-bit virtual machine Operating Systems

Microsoft SQL Server 2012 SP1


Figure 3: Workload Architecture

The workload contains the following hardware as shown in Figure 3:

Two Cisco Nexus 5548UP Layer 2 Access Switches

Two Cisco UCS 6248UP Series Fabric Interconnects

Two Cisco UCS 5108 Blade Server Chassis with two 2204XP IO Modules per chassis

Three Cisco UCS B200 M3 Blade servers with Intel E5-2680v2 processors, 384GB RAM, and VIC1240

mezzanine cards for the 300 hosted Windows 7 virtual desktop workloads with N+1 server fault tolerance.

Five Cisco UCS B200 M3 Blade servers with Intel E5-2680v2 processors, 256 GB RAM, and VIC1240

mezzanine cards for the 700 hosted shared Windows Server 2012 server desktop workloads with N+1

server fault tolerance.

Two Cisco UCS B200 M3 Blade servers with Intel E5-2650 processors, 128 GB RAM, and VIC1240

mezzanine cards for the infrastructure virtualized workloads

EMC VNX5400 dual-controller storage system with four disk shelves and 10 Gigabit Ethernet ports for iSCSI and NFS/CIFS connectivity

(Not Shown) One Cisco UCS 5108 Blade Server Chassis with 3 UCS B200 M3 Blade servers with Intel

E5-2650 processors, 128 GB RAM, and VIC1240 mezzanine cards for the Login VSI launcher

infrastructure

The EMC VNX5400 disk shelf configurations are detailed in section EMC VNX Series later in this document.


Logical Architecture

The logical architecture of the validated solution is designed to support 1000 users within two chassis and fourteen blades, which provides physical redundancy for the chassis and blade servers for each workload.

Figure 4 outlines the logical architecture of the test environment.

Figure 4: Logical Architecture Overview

Table 1 outlines all of the servers in the configuration.

Table 1. Infrastructure Architecture

Server Name Location Purpose

vSphere1 Physical – Chassis 1 Windows 2012 Datacenter VMs ESXi 5.5 host

(Infrastructure Guests)

vSphere4,5 Physical – Chassis 1 XenDesktop 7.5 RDS ESXi 5.5 Hosts

vSphere 2,3 Physical – Chassis 1 XenDesktop 7.5 HVD ESXi 5.5 Host

vSphere 9 Physical – Chassis 2 Windows 2012 Datacenter VMs ESXi 5.5 host

(Infrastructure Guests)

vSphere 11,12,13 Physical – Chassis 2 XenDesktop 7.5 RDS ESXi 5.5 Hosts

vSphere 10 Physical – Chassis 2 XenDesktop 7.5 HVD ESXi 5.5 Hosts

AD Virtual – vsphere1 Active Directory Domain Controller

XDC1 Virtual – vsphere1 XenDesktop 7.5 controller

PVS1 Virtual – vsphere1 Provisioning Services 7.1 streaming server


VCENTER Virtual – vsphere1 vCenter 5.5 Server

StoreFront1 Virtual – vsphere1 StoreFront Services server

SQL Virtual – vsphere1 SQL Server (clustered)

XenVSM_Primary Virtual – vsphere1 Nexus 1000-V VSM HA node

LIC Virtual – vsphere1 XenDesktop 7.5 License server

N1KV-VSM-1 Virtual – vsphere1 Nexus 1000-V VSM HA primary node

AD1 Virtual – vsphere9 Active Directory Domain Controller

XDC2 Virtual – vsphere9 XenDesktop 7.5 controller

PVS2 Virtual – vsphere9 Provisioning Services 7.1 streaming server

StoreFront2 Virtual – vsphere9 StoreFront Services server

SQL2 Virtual – vsphere9 SQL Server (clustered)

N1KV-VSM-1 Virtual – vsphere9 Nexus 1000-V VSM HA backup node

Software Revisions

This section includes the software versions of the primary products installed in the environment.

Table 2. Software Revisions

Vendor Product Version

Cisco UCS Component Firmware 2.2(1d)

Cisco UCS Manager 2.2(1d)

Cisco Nexus 1000V for vSphere 4.2.1SV2.2.

Citrix XenDesktop 7.5.0.4531

Citrix Provisioning Services 7.1.0.4022

Citrix StoreFront Services 2.5.0.29

VMware vCenter 5.5.0 Build 1476327

VMware vSphere ESXi 5.5 5.5.0 Build 1331820

EMC VAAI Plugin 1.0-11

EMC Power Path for VMware 5.9 SP1 Build 011

EMC VNX Block Operating System 05.33.000.5.051

EMC VNX File Operating System 8.1.2-51

Configuration Guidelines

The 1000-user Citrix XenDesktop 7.5 solution described in this document provides details for configuring a fully redundant, highly available configuration. Configuration guidelines indicate which redundant component is being configured with each step, whether that is A or B. For example, Nexus A and Nexus B identify the pair of Cisco Nexus switches being configured. The Cisco UCS Fabric Interconnects are configured similarly.


This document is intended to allow the reader to configure the Citrix XenDesktop 7.5 customer environment as a stand-alone solution.

VLAN

The VLAN configuration recommended for the environment includes a total of seven VLANs as outlined in the

table below.

Table 3. VLAN Configuration

VLAN Name VLAN ID Use

Default 1 Native VLAN

VM-Network 272 Virtual Machine Network, NFS, CIFS

MGMT-OB 517 Out of Band Management Network

MGMT-IB 516 In Band Management Network

iSCSI-a 275 IP Storage VLAN for Boot

iSCSI-b 276 IP Storage VLAN for Boot

vMOTION 273 vMotion
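The VLANs in Table 3 can be defined on each Cisco Nexus switch (A and B) with a fragment like the following. This is a sketch only; the names and IDs match Table 3, but trunk and interface assignments are omitted, and VLAN 1 already exists by default as the native VLAN:

```
vlan 272
  name VM-Network
vlan 273
  name vMOTION
vlan 275
  name iSCSI-a
vlan 276
  name iSCSI-b
vlan 516
  name MGMT-IB
vlan 517
  name MGMT-OB
```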

VMware Clusters We utilized four VMware Clusters in two data centers to support the solution and testing environment:

CVSPEX-DT Citrix VSPEX Desktop Data Center

XenDesktop RDS Clusters (Windows Server 2012 R2 hosted shared desktops)

XenDesktop Hosted Virtual Desktop Cluster (Windows 7 SP1 32-bit pooled virtual desktops)

CVSPEX-INF Citrix VSPEX Infrastructure and Launcher Data Center

Infrastructure Cluster (vCenter, Active Directory, DNS, DHCP, SQL Clusters, XenDesktop Controllers,

Provisioning Servers, and Nexus 1000V Virtual Switch Manager appliances, etc.)

Launcher Cluster (The Login Consultants Login VSI launcher infrastructure was hosted in the same Cisco UCS domain, sharing the same switching infrastructure but running on separate storage.)

Figure 5: vCenter Data Centers and Clusters Deployed


Infrastructure Components This section describes the infrastructure components used in the solution outlined in this study.

Cisco Unified Computing System (UCS) Cisco Unified Computing System is a set of pre-integrated data center components that comprises blade servers,

adapters, fabric interconnects, and extenders that are integrated under a common embedded management system.

This approach results in far fewer system components and much better manageability, operational efficiencies, and

flexibility than comparable data center platforms.

Cisco Unified Computing System Components Cisco UCS components are shown in Figure 6.

Figure 6: Cisco Unified Computing System Components

The Cisco UCS is designed from the ground up to be programmable and self-integrating. A server’s entire hardware

stack, ranging from server firmware and settings to network profiles, is configured through model-based

management. With Cisco virtual interface cards, even the number and type of I/O interfaces is programmed

dynamically, making every server ready to power any workload at any time.


With model-based management, administrators manipulate a model of a desired system configuration, associate a model’s service profile with hardware resources, and the system configures itself to match the model. This

automation speeds provisioning and workload migration with accurate and rapid scalability. The result is increased

IT staff productivity, improved compliance, and reduced risk of failures due to inconsistent configurations.
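The idea of model-based management can be pictured as applying a data model to interchangeable hardware. The sketch below is purely illustrative: the attribute names are invented for the example and are not the actual Cisco UCS Manager schema.

```python
# A service profile captured as a plain data model (attribute names are
# illustrative assumptions, not the real Cisco UCS Manager object model).
service_profile = {
    "boot_order": ["iscsi-a", "iscsi-b"],
    "vnics": [{"name": "eth0", "vlan": 272}, {"name": "eth1", "vlan": 273}],
    "firmware": "2.2(1d)",
}

def associate(profile: dict, blade: dict) -> dict:
    """On association, the blade takes on every identity and setting in the model."""
    return {**blade, **profile}

# Any physical blade can receive the same model, which is what makes
# re-provisioning and workload migration fast and consistent.
server = associate(service_profile, {"chassis": 1, "slot": 1})
print(server["firmware"])  # the blade now carries the firmware the model specifies
```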

Cisco Fabric Extender technology reduces the number of system components to purchase, configure, manage, and

maintain by condensing three network layers into one. It eliminates both blade server and hypervisor-based switches

by connecting fabric interconnect ports directly to individual blade servers and virtual machines. Virtual networks

are now managed exactly as physical networks are, but with massive scalability. This represents a radical

simplification over traditional systems, reducing capital and operating costs while increasing business agility,

simplifying and speeding deployment, and improving performance.

Fabric Interconnect Cisco UCS Fabric Interconnects create a unified network fabric throughout the Cisco UCS. They provide uniform

access to both networks and storage, eliminating the barriers to deploying a fully virtualized environment based on a

flexible, programmable pool of resources.

Cisco Fabric Interconnects comprise a family of line-rate, low-latency, lossless 10-GE, Cisco Data Center Ethernet,

and FCoE interconnect switches. Based on the same switching technology as the Cisco Nexus 5000 Series, Cisco

UCS 6000 Series Fabric Interconnects provide the additional features and management capabilities that make them

the central nervous system of Cisco UCS.

The Cisco UCS Manager software runs inside the Cisco UCS Fabric Interconnects. The Cisco UCS 6000 Series

Fabric Interconnects expand the UCS networking portfolio and offer higher capacity, higher port density, and lower

power consumption. These interconnects provide the management and communication backbone for the Cisco UCS

B-Series Blades and Cisco UCS Blade Server Chassis.

All chassis and all blades that are attached to the Fabric Interconnects are part of a single, highly available

management domain. By supporting unified fabric, the Cisco UCS 6200 Series provides the flexibility to support

LAN and SAN connectivity for all blades within its domain right at configuration time. Typically deployed in

redundant pairs, the Cisco UCS Fabric Interconnect provides uniform access to both networks and storage,

facilitating a fully virtualized environment.

The Cisco UCS Fabric Interconnect family currently comprises the Cisco 6100 Series and Cisco 6200 Series of Fabric Interconnects.

Cisco UCS 6248UP 48-Port Fabric Interconnect

The Cisco UCS 6248UP 48-Port Fabric Interconnect is a 1 RU, 10-GE, Cisco Data Center Ethernet, and FCoE interconnect providing more than 1 Tbps throughput with low latency. It has 32 fixed SFP+ ports supporting Fibre Channel, 10-GE, Cisco Data Center Ethernet, and FCoE.

One expansion module slot can provide up to sixteen additional Fibre Channel, 10-GE, Cisco Data Center Ethernet, and FCoE SFP+ ports.

Note: Cisco UCS 6248UP 48-Port Fabric Interconnects were used in this study.

Cisco UCS 2200 Series IO Module The Cisco UCS 2100/2200 Series FEX multiplexes and forwards all traffic from blade servers in a chassis to a parent Cisco UCS Fabric Interconnect over 10-Gbps unified fabric links. All traffic, even traffic between

blades on the same chassis, or VMs on the same blade, is forwarded to the parent interconnect, where network

profiles are managed efficiently and effectively by the Fabric Interconnect. At the core of the Cisco UCS Fabric

Extender are ASIC processors developed by Cisco that multiplex all traffic.

Note: Up to two fabric extenders can be placed in a blade chassis.

The Cisco UCS 2104 has eight 10GBASE-KR connections to the blade chassis midplane, with one connection per fabric extender for each of the chassis’ eight half slots. This gives each half-slot blade server access to each of two 10-Gbps unified fabric-based networks via SFP+ sockets for both throughput and redundancy. It has four ports connecting up to the fabric interconnect.


The Cisco UCS 2208 has thirty-two 10GBASE-KR connections to the blade chassis midplane, with four connections per fabric extender for each of the chassis’ eight half slots. This gives each half-slot blade server access to each of two 4x10-Gbps unified fabric-based networks via SFP+ sockets for both throughput and redundancy. It has eight ports connecting up to the fabric interconnect.

Note: Cisco UCS 2208 fabric extenders were utilized in this study.
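The link counts above determine the worst-case oversubscription from a chassis to the fabric interconnect. A quick calculation, using the server-facing and uplink port counts from the 2104 and 2208 descriptions:

```python
def chassis_oversubscription(server_links: int, uplinks: int, gbps_per_link: int = 10) -> float:
    """Ratio of server-facing bandwidth to uplink bandwidth for one fabric extender."""
    return (server_links * gbps_per_link) / (uplinks * gbps_per_link)

# Cisco UCS 2104: 8 x 10GBASE-KR server links, 4 fabric uplinks
print(chassis_oversubscription(8, 4))    # 2.0  (2:1 oversubscription)
# Cisco UCS 2208: 32 x 10GBASE-KR server links, 8 fabric uplinks
print(chassis_oversubscription(32, 8))   # 4.0  (4:1 oversubscription)
```

These ratios are worst-case figures; actual oversubscription depends on how many vNICs and uplinks are provisioned per blade.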

Cisco UCS Chassis The Cisco UCS 5108 Series Blade Server Chassis is a 6 RU blade chassis that will accept up to eight half-width

Cisco UCS B-Series Blade Servers or up to four full-width Cisco UCS B-Series Blade Servers, or a combination of

the two. The Cisco UCS 5108 Series Blade Server Chassis can accept four redundant power supplies with automatic

load-sharing and failover, and two Cisco UCS Fabric Extenders (either 2100 or 2200 Series). The chassis is managed

by Cisco UCS Chassis Management Controllers, which are mounted in the Cisco UCS Fabric Extenders and work in

conjunction with the Cisco UCS Manager to control the chassis and its components.

A single Cisco UCS managed domain can theoretically scale to up to 40 individual chassis and 320 blade servers. At

this time Cisco supports up to 20 individual chassis and 160 blade servers.

Basing the I/O infrastructure on a 10-Gbps unified network fabric allows the Cisco UCS to have a streamlined

chassis with a simple yet comprehensive set of I/O options. The result is a chassis that has only five basic

components:

The physical chassis with passive midplane and active environmental monitoring circuitry

Four power supply bays with power entry in the rear, and hot-swappable power supply units accessible

from the front panel

Eight hot-swappable fan trays, each with two fans

Two fabric extender slots accessible from the back panel

Eight blade server slots accessible from the front panel

Cisco UCS B200 M3 Blade Server

The Cisco UCS B200 M3 is a third-generation, half-slot, two-socket blade server. The Cisco UCS B200 M3 harnesses the power of the latest Intel® Xeon® processor E5-2600 v2 product family, with up to 768 GB of RAM (using 32GB DIMMs), two optional SAS/SATA/SSD disk drives, and up to dual 4x 10 Gigabit Ethernet throughput, utilizing the

VIC 1240 LAN on motherboard (LOM) design. The Cisco UCS B200 M3 further extends the capabilities of Cisco

UCS by delivering new levels of manageability, performance, energy efficiency, reliability, security, and I/O

bandwidth for enterprise-class virtualization and other mainstream data center workloads.

In addition, customers who initially purchased Cisco UCS B200 M3 blade servers with Intel E5-2600 series processors can field-upgrade their blades to the second-generation E5-2600 v2 processors, gaining increased processor capacity and investment protection.


Figure 7: Cisco UCS B200 M3 Server

Cisco UCS VIC1240 Converged Network Adapter

A Cisco® innovation, the Cisco UCS Virtual Interface Card (VIC) 1240 (Figure 8) is a 4-port 10 Gigabit Ethernet,

Fibre Channel over Ethernet (FCoE)-capable modular LAN on motherboard (mLOM) designed exclusively for the

M3 generation of Cisco UCS B-Series Blade Servers. When used in combination with an optional Port Expander,

the Cisco UCS VIC 1240 capabilities can be expanded to eight ports of 10 Gigabit Ethernet.

The Cisco UCS VIC 1240 enables a policy-based, stateless, agile server infrastructure that can present up to 256

PCIe standards-compliant interfaces to the host that can be dynamically configured as either network interface cards

(NICs) or host bus adapters (HBAs). In addition, the Cisco UCS VIC 1240 supports Cisco Data Center Virtual

Machine Fabric Extender (VM-FEX) technology, which extends the Cisco UCS fabric interconnect ports to virtual

machines, simplifying server virtualization deployment.

Figure 8: Cisco UCS VIC 1240 Converged Network Adapter


Figure 9: The Cisco UCS VIC1240 virtual interface cards deployed in the Cisco UCS B-Series B200 M3 blade servers

Citrix XenDesktop 7.5

Enhancements in XenDesktop 7.5 Citrix XenDesktop 7.5 includes significant enhancements to help customers deliver Windows apps and desktops as

mobile services while addressing management complexity and associated costs. Enhancements in this release

include:

Unified product architecture for XenApp and XenDesktop—the FlexCast Management Architecture

(FMA). This release supplies a single set of administrative interfaces to deliver both hosted-shared

applications (RDS) and complete virtual desktops (VDI). Unlike earlier releases that separately provisioned

Citrix XenApp and XenDesktop farms, the XenDesktop 7.5 release allows administrators to deploy a single

infrastructure and use a consistent set of tools to manage mixed application and desktop workloads.

Support for extending deployments to the cloud. This release provides the ability for hybrid cloud

provisioning from Amazon Web Services (AWS) or any Cloud Platform-powered public or private cloud.

Cloud deployments are configured, managed, and monitored through the same administrative consoles as

deployments on traditional on-premises infrastructure.

Enhanced HDX technologies. Since mobile technologies and devices are increasingly prevalent, Citrix has

engineered new and improved HDX technologies to improve the user experience for hosted Windows apps

and desktops.

A new version of StoreFront. The StoreFront 2.5 release provides a single, simple, and consistent

aggregation point for all user services. Administrators can publish apps, desktops, and data services to

StoreFront, from which users can search and subscribe to services.

Remote power control for physical PCs. Remote PC Access supports Wake on LAN, which adds the ability to power on physical PCs remotely. This allows users to keep PCs powered off when not in use, conserving energy and reducing costs.

Full AppDNA support. AppDNA provides automated analysis of applications for Windows platforms and

suitability for application virtualization through App-V, XenApp, or XenDesktop. Full AppDNA

functionality is available in some editions.

Additional virtualization resource support. As in this Cisco Validated Design, administrators can configure

connections to VMware vSphere 5.5 hypervisors.
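Of the enhancements above, the Wake on LAN mechanism is simple at the wire level: a "magic packet" of six 0xFF bytes followed by the target NIC's MAC address repeated sixteen times, typically broadcast over UDP to port 9. The sketch below illustrates that mechanism only; the MAC address shown is a placeholder, not one from this environment.

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN magic packet: 6 x 0xFF, then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local subnet."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))

# send_wol("00:25:b5:00:00:0a")  # placeholder MAC in a Cisco UCS-style format
```

In a XenDesktop deployment the Delivery Controller issues this wake-up on the user's behalf, so no manual packet crafting is required; the sketch only shows what travels on the network.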


FlexCast Management Architecture

In Citrix XenDesktop 7.5, FlexCast Management Architecture (FMA) technology is responsible for delivering and

managing hosted-shared RDS apps and complete VDI desktops. By using Citrix Receiver with XenDesktop 7.5,

users have access to a device-native experience on a variety of endpoints, including Windows, Mac, Linux, iOS,

Android, ChromeOS, and Blackberry devices.

Figure 10: Key Components in a typical deployment

Director — Director is a web-based tool that enables IT support and help desk teams to monitor an

environment, troubleshoot issues before they become system-critical, and perform support tasks for end

users.

Receiver — Installed on user devices, Citrix Receiver provides users with quick, secure, self-service access

to documents, applications, and desktops. Receiver provides on-demand access to Windows, Web, and

Software as a Service (SaaS) applications.

StoreFront — StoreFront authenticates users to sites hosting resources and manages stores of desktops and

applications that users can access.

Studio — Studio is the management console to set up the environment, create workloads to host

applications and desktops, and assign applications and desktops to users.

License server — At least one license server is needed to store and manage license files.

Delivery Controller — Installed on servers in the data center, the Delivery Controller consists of services

that communicate with the hypervisor to distribute applications and desktops, authenticate and manage user

access, and broker connections between users and their virtual desktops and applications. The Controller

manages the desktop state, starting and stopping them based on demand and administrative configuration.

Each XenDesktop site has one or more Delivery Controllers.

Hypervisor — Hypervisor technology is used to provide an enterprise-class virtual machine infrastructure

that is the foundation for delivering virtual applications and desktops. Citrix XenDesktop is hypervisor-

agnostic and can be deployed with Citrix XenServer, Microsoft Hyper-V, or VMware vSphere. For this

CVD, the hypervisor used was VMware ESXi 5.5.

Virtual Delivery Agent (VDA) — Installed on server or workstation operating systems, the VDA enables

connections for desktops and apps. For Remote PC Access, install the VDA on the office PC.


Machine Creation Services (MCS) — A collection of services that work together to create virtual servers

and desktops from a master image on demand, optimizing storage utilization and providing a pristine

virtual machine to users every time they log on. Machine Creation Services is fully integrated and

administrated in Citrix Studio.

Windows Server OS machines — These are VMs or physical machines based on Windows Server

operating system used for delivering applications or hosted shared desktops to users.

Desktop OS machines — These are VMs or physical machines based on Windows Desktop operating

system used for delivering personalized desktops to users, or applications from desktop operating systems.

Remote PC Access — User devices that are included on a whitelist, enabling users to access resources on

their office PCs remotely, from any device running Citrix Receiver.

In addition, Citrix Provisioning Services (PVS) technology is responsible for streaming a shared virtual disk (vDisk)

image to the configured Server OS or Desktop OS machines. This streaming capability allows VMs to be

provisioned and re-provisioned in real-time from a single image, eliminating the need to patch individual systems

and conserving storage. All patching is done in one place and then streamed at boot-up. PVS supports image

management for both RDS and VDI-based machines, including support for image snapshots and rollbacks.

High-Definition User Experience (HDX) Technology High-Definition User Experience (HDX) technology in this release is optimized to improve the user experience for

hosted Windows apps on mobile devices. Specific enhancements include:

HDX Mobile™ technology, designed to cope with the variability and packet loss inherent in today’s mobile

networks. HDX technology supports deep compression and redirection, taking advantage of advanced

codec acceleration and an industry-leading H.264-based compression algorithm. The technology enables

dramatic improvements in frame rates while requiring significantly less bandwidth. Real-time multimedia

transcoding improves the delivery of Windows Media content (even in extreme network conditions). HDX

technology offers a rich multimedia experience and optimized performance for voice and video

collaborations.

HDX Touch technology enables mobile navigation capabilities similar to native apps, without rewrites or

porting of existing Windows applications. Optimizations support native menu controls, multi-touch

gestures, and intelligent sensing of text-entry fields, providing a native application look and feel.

HDX 3D Pro uses advanced server-side GPU resources for compression and rendering of the latest

OpenGL and DirectX professional graphics apps. GPU support includes both dedicated user and shared

user workloads. In this release, HDX 3D Pro has been upgraded to support Windows 8.

Citrix XenDesktop 7.5 Desktop and Application Services IT departments strive to deliver application services to a broad range of enterprise users that have varying

performance, personalization, and mobility requirements. Citrix XenDesktop 7.5 allows IT to configure and deliver

any type of virtual desktop or app, hosted or local, and optimize delivery to meet individual user requirements, while

simplifying operations, securing data, and reducing costs.


Figure 11: XenDesktop Single Infrastructure

As illustrated in Figure 11, the XenDesktop 7.5 release allows administrators to create a single infrastructure that

supports multiple modes of service delivery, including:

Application Virtualization and Hosting (through XenApp). Applications are installed on or streamed to

Windows servers in the data center and remotely displayed to users’ desktops and devices.

Hosted Shared Desktops (RDS). Multiple user sessions share a single, locked-down Windows Server

environment running in the datacenter and accessing a core set of apps. This model of service delivery is

ideal for task workers using low-intensity applications, and enables more desktops per host compared with VDI.

Pooled VDI Desktops. This approach leverages a single desktop OS image to create multiple thinly

provisioned or streamed desktops. Optionally, desktops can be configured with a Personal vDisk to

maintain user application, profile and data differences that are not part of the base image. This approach

replaces the need for dedicated desktops, and is generally deployed to address the desktop needs of

knowledge workers that run more intensive application workloads.

VM Hosted Apps (16-bit, 32-bit, or 64-bit Windows apps). Applications are hosted on virtual desktops

running Windows 7, XP, or Vista and then remotely displayed to users’ physical or virtual desktops and

devices.

This CVD focuses on delivering a mixed workload consisting of hosted shared desktops (HSD based on RDS) and

hosted virtual desktops (VDI).

Citrix Provisioning Services 7.1 A significant advantage to service delivery through RDS and VDI is how these technologies simplify desktop

administration and management. Citrix Provisioning Services (PVS) takes the approach of streaming a single shared

virtual disk (vDisk) image rather than provisioning and distributing multiple OS image copies across multiple virtual

machines. One advantage of this approach is that it constrains the number of disk images that must be managed,

even as the number of desktops grows, ensuring image consistency. At the same time, using a single shared image

(rather than hundreds or thousands of desktop images) significantly reduces the required storage footprint and

dramatically simplifies image management.
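The storage saving from a single shared image is easy to quantify. The figures in the sketch below (a 40 GB master image and a 6 GB per-desktop write cache) are illustrative assumptions, not measurements from this validation:

```python
def full_clone_storage_gb(desktops: int, image_gb: int) -> int:
    """Dedicated image copy per desktop."""
    return desktops * image_gb

def pvs_storage_gb(desktops: int, image_gb: int, write_cache_gb: int) -> int:
    """One shared vDisk plus a small per-desktop write cache."""
    return image_gb + desktops * write_cache_gb

# 1000 seats, 40 GB master image, 6 GB write cache per desktop (assumed sizes)
print(full_clone_storage_gb(1000, 40))  # 40000 GB of full clones
print(pvs_storage_gb(1000, 40, 6))      # 6040 GB with a shared vDisk
```

Even with generous write-cache assumptions, the shared-image model reduces the image-related footprint by an order of magnitude, and there is still only one image to patch.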

Since there is a single master image, patch management is simple and reliable. All patching is done on the master

image, which is then streamed as needed. When an updated image is ready for production, the administrator simply

reboots to deploy the new image. Rolling back to a previous image is done in the same manner. Local hard disk

drives in user systems can be used for runtime data caching or, in some scenarios, removed entirely, lowering power

usage, system failure rates, and security risks.

After installing and configuring PVS components, a vDisk is created from a device’s hard drive by taking a snapshot

of the OS and application image, and then storing that image as a vDisk file on the network. vDisks can exist on a


Provisioning Server, file share, or in larger deployments (as in this CVD), on a storage system with which the

Provisioning Server can communicate (via iSCSI, SAN, NAS, and CIFS). vDisks can be assigned to a single target

device in Private Image Mode, or to multiple target devices in Standard Image Mode.

When a user device boots, the appropriate vDisk is located based on the boot configuration and mounted on the

Provisioning Server. The software on that vDisk is then streamed to the target device and appears like a regular hard

drive to the system. Instead of pulling all the vDisk contents down to the target device (as is done with some

imaging deployment solutions), the data is brought across the network in real time, as needed. This greatly improves

the overall user experience since it minimizes desktop startup time.
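The on-demand behavior described above can be modeled as a block cache that fetches each vDisk block from the server only on first read. This is a toy illustration of the streaming idea, not PVS's actual protocol; block and image sizes are arbitrary:

```python
class StreamedVDisk:
    """Toy model of PVS-style streaming: blocks cross the wire only on first read."""

    def __init__(self, vdisk_blocks):
        self.vdisk = vdisk_blocks   # the shared master image held by the server
        self.cache = {}             # blocks already streamed to this target device
        self.bytes_streamed = 0

    def read(self, block_id):
        if block_id not in self.cache:          # fetch on demand, exactly once
            self.cache[block_id] = self.vdisk[block_id]
            self.bytes_streamed += len(self.cache[block_id])
        return self.cache[block_id]

# A 4-block image where boot touches only blocks 0 and 1:
disk = StreamedVDisk({i: bytes(512) for i in range(4)})
disk.read(0); disk.read(1); disk.read(0)   # the re-read is served locally
print(disk.bytes_streamed)                 # 1024 -- only half the image crossed the wire
```

Because only the blocks actually read at boot are transferred, startup traffic and time stay small even when the full image is large.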

This release of PVS extends built-in administrator roles to support delegated administration based on groups that

already exist within the network (Windows or Active Directory Groups). All group members share the same

administrative privileges within a XenDesktop site. An administrator may have multiple roles if they belong to more

than one group.

EMC VNX Series The desktop solutions described in this document are based on the EMC VNX5400 storage array. The EMC VNX series supports a wide range of business-class features that are ideal for the end-user computing environment, including:

EMC Fully Automated Storage Tiering for Virtual Pools (FAST VP™)

EMC FAST™ Cache

File-level data deduplication and compression

Block deduplication

Thin provisioning

Replication

Snapshots and checkpoints

File-level retention

Quota management

EMC VNX5400 Used in Testing The EMC VNX family delivers industry-leading innovation and enterprise capabilities for file, block, and object

storage in a scalable, easy-to-use solution. This next-generation storage platform combines powerful and flexible

hardware with advanced efficiency, management, and protection software to meet the demanding needs of today’s

enterprises.

This solution was validated using a VNX5400 that is versatile in its storage protocol support. It provides iSCSI

storage for hypervisor SAN boot, NFS storage for use as VMware datastores, CIFS shares for user profiles, home directories, and the vDisk store, and TFTP services to allow PVS-based desktops to PXE boot.

Unisphere Management Suite

EMC Unisphere Management Suite extends Unisphere’s easy-to-use interface to include VNX Monitoring and Reporting for validating performance and anticipating capacity requirements. As shown in the figure below, the suite also includes Unisphere Remote for centrally managing up to thousands of VNX and VNXe systems, with new

support for XtremSW Cache.


Figure 12: Unisphere Management Suite

EMC Virtual Storage Integrator for VMware vCenter

Virtual Storage Integrator (VSI) is a no-charge VMware vCenter plug-in available to all VMware users with EMC

storage. VSPEX customers can use VSI to simplify management of virtualized storage. VMware administrators can

gain visibility into their VNX storage using the same familiar vCenter interface to which they are accustomed.

With VSI, IT administrators can do more work in less time. VSI offers unmatched access control that enables you to efficiently manage and delegate storage tasks with confidence. Daily management tasks can be performed with up to 90 percent fewer clicks and up to 10 times higher productivity.

VMware vStorage APIs for Array Integration

VMware vStorage APIs for Array Integration (VAAI) offloads VMware storage-related functions from the server to

the VNX storage system, enabling more efficient use of server and network resources for increased performance and

consolidation.

VMware vStorage APIs for Storage Awareness

VMware vStorage APIs for Storage Awareness (VASA) is a VMware-defined API that displays storage information

through vCenter. Integration between VASA technology and VNX makes storage management in a virtualized

environment a seamless experience.

PowerPath Virtual Edition

PowerPath is host-based software that provides automated data path management and load-balancing capabilities for heterogeneous server, network, and storage resources deployed in physical and virtual environments. PowerPath uses multiple I/O data paths to share the workload, and automated load balancing to ensure the efficient use of data paths.
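One way to picture multipath load balancing is choosing, per I/O, the path with the least outstanding work. The sketch below shows that generic least-queue-depth policy; it is not PowerPath's actual (proprietary) algorithm, and the path names are illustrative vSphere-style identifiers:

```python
def pick_path(outstanding: dict) -> str:
    """Least-queue-depth selection across available I/O paths."""
    return min(outstanding, key=outstanding.get)

# Two hypothetical paths to the same LUN, with no I/O in flight yet
paths = {"vmhba1:C0:T0": 0, "vmhba2:C0:T0": 0}
for io in range(6):          # dispatch six I/Os
    p = pick_path(paths)
    paths[p] += 1            # the I/O is now outstanding on the chosen path
print(paths)                 # {'vmhba1:C0:T0': 3, 'vmhba2:C0:T0': 3}
```

The work spreads evenly across both paths, and if one path fails it simply stops being a candidate, which is the failover half of path management.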

The PowerPath/VE plug-in is installed using the vSphere Update Manager. PowerPath/VE for VMware vSphere

Installation and Administration Guide describes the process to distribute the plug-in and apply the required licenses.

VMware ESXi 5.5

VMware vSphere® 5.5 introduces many new features and enhancements to further extend the core capabilities of

the vSphere platform. These features and capabilities include vSphere ESXi Hypervisor™, VMware vSphere High

Availability (vSphere HA), virtual machines, VMware vCenter Server™, storage, networking and vSphere Big Data

Extensions.

Key Features and Enhancements:

vSphere ESXi Hypervisor Enhancements

– Hot-Pluggable SSD PCI Express (PCIe) Devices

– Support for Reliable Memory Technology

– Enhancements for CPU C-States


Virtual Machine Enhancements

– Virtual Machine Compatibility with VMware ESXi™ 5.5

– Expanded Virtual Graphics Support

– Graphic Acceleration for Linux Guests

VMware vCenter Server Enhancements

– VMware® vCenter™ Single Sign-On

– VMware vSphere Web Client

– VMware vCenter Server Appliance™

– vSphere App HA

– vSphere HA and VMware vSphere Distributed Resource Scheduler™ (vSphere DRS) Virtual Machine-Virtual Machine Affinity Rules Enhancements

– vSphere Big Data Extensions

vSphere Storage Enhancements

– Support for 62TB VMDK

– MSCS Updates

– vSphere 5.1 Feature Updates

– 16GB E2E support

– PDL AutoRemove

– vSphere Replication Interoperability

– vSphere Replication Multi-Point-in-Time Snapshot Retention

– vSphere Flash Read Cache

vSphere Networking Enhancements

– Link Aggregation Control Protocol Enhancements

– Traffic Filtering

– Quality of Service Tagging

– SR-IOV Enhancements

– Enhanced Host-Level Packet Capture

– 40GB NIC support

Learn more about vSphere 5.5 at the following website:

http://www.vmware.com/products/vsphere/resources.html

Modular Virtual Desktop Infrastructure Technical Overview

Modular Architecture Today’s IT departments are facing a rapidly-evolving workplace environment. The workforce is becoming

increasingly diverse and geographically distributed and includes offshore contractors, distributed call center

operations, knowledge and task workers, partners, consultants, and executives connecting from locations around the

globe at all times.

An increasingly mobile workforce wants to use a growing array of client computing and mobile devices that they

can choose based on personal preference. These trends are increasing pressure on IT to ensure protection of corporate data and to prevent data leakage or loss through any combination of user, endpoint device, and desktop access scenarios (Figure 13). These challenges are compounded by desktop refresh cycles needed to accommodate aging PCs with limited local storage, and by migration to new operating systems, specifically Microsoft Windows 7 and Windows 8.


Figure 13: The Evolving Workplace Landscape

Some of the key drivers for desktop virtualization are increased data security and reduced TCO through increased

control and reduced management costs.

Cisco Data Center Infrastructure for Desktop Virtualization Cisco focuses on three key elements to deliver the best desktop virtualization data center infrastructure:

simplification, security, and scalability. The software combined with platform modularity provides a simplified,

secure, and scalable desktop virtualization platform (Figure 14).

Figure 14: Citrix XenDesktop on Cisco UCS


Simplified Cisco UCS provides a radical new approach to industry standard computing and provides the heart of the data center

infrastructure for desktop virtualization and user mobility. Among the many features and benefits of Cisco UCS are

the drastic reductions in the number of servers needed and number of cables per server and the ability to very

quickly deploy or re-provision servers through Cisco UCS Service Profiles. With fewer servers and cables to

manage and with streamlined server and virtual desktop provisioning, operations are significantly simplified.

Thousands of desktops can be provisioned in minutes with Cisco Service Profiles and Cisco storage partners’

storage-based cloning. This speeds time to productivity for end users, improves business agility, and allows IT

resources to be allocated to other tasks.

IT tasks are further simplified through reduced management complexity, provided by the highly integrated Cisco

UCS Manager, along with fewer servers, interfaces, and cables to manage and maintain. This is possible due to the

industry-leading, highest virtual desktop density per blade of Cisco UCS along with the reduced cabling and port

count due to the unified fabric and unified ports of Cisco UCS and desktop virtualization data center infrastructure.

Simplification also leads to improved and more rapid success of a desktop virtualization implementation. Cisco and

its partners, Citrix (XenDesktop and Provisioning Server) and EMC, have developed integrated, validated

architectures, including available pre-defined, validated infrastructure packages, known as FlexPod.

Secure While virtual desktops are inherently more secure than their physical world predecessors, they introduce new

security considerations. Desktop virtualization significantly increases the need for virtual machine-level awareness

of policy and security, especially given the dynamic and fluid nature of virtual machine mobility across an extended

computing infrastructure. The ease with which new virtual desktops can proliferate magnifies the importance of a

virtualization-aware network and security infrastructure. Cisco UCS and Nexus data center infrastructure for

desktop virtualization provides stronger data center, network, and desktop security with comprehensive security

from the desktop to the hypervisor. Security is enhanced with segmentation of virtual desktops, virtual machine-

aware policies and administration, and network security across the LAN and WAN infrastructure.

Scalable Growth of a desktop virtualization solution is all but inevitable and it is critical to have a solution that can scale

predictably with that growth. The Cisco solution supports more virtual desktops per server and additional servers

scale with near linear performance. Cisco data center infrastructure provides a flexible platform for growth and

improves business agility. Cisco UCS Service Profiles allow for on-demand desktop provisioning, making it easy to

deploy dozens or thousands of additional desktops.

Each additional Cisco UCS server provides near linear performance and utilizes Cisco’s dense memory servers and

unified fabric to avoid desktop virtualization bottlenecks. The high performance, low latency network supports high

volumes of virtual desktop traffic, including high resolution video and communications.

The Cisco UCS and Nexus data center infrastructure is an ideal platform for growth, with transparent scaling of

server, network, and storage resources to support desktop virtualization.

Savings and Success As demonstrated above, the simplified, secure, scalable Cisco data center infrastructure solution for desktop

virtualization will save time and cost. There will be faster payback, better ROI, and lower TCO with the industry’s

highest virtual desktop density per server, meaning there will be fewer servers needed, reducing both capital

expenditures (CapEx) and operating expenditures (OpEx). There will also be much lower network infrastructure

costs, with fewer cables per server and fewer ports required, through the Cisco UCS architecture and unified fabric.

The simplified deployment of Cisco UCS for desktop virtualization speeds up time to productivity and enhances

business agility. IT staff and end users are more productive more quickly and the business can react to new

opportunities by simply deploying virtual desktops whenever and wherever they are needed. The high performance

Cisco systems and network deliver a near-native end-user experience, allowing users to be productive anytime,

anywhere.


Cisco Services Cisco offers assistance for customers in the analysis, planning, implementation, and support phases of the VDI

lifecycle. These services are provided by the Cisco Advanced Services group. Some examples of Cisco services

include:

Cisco VXI Unified Solution Support

Cisco VXI Desktop Virtualization Strategy Service

Cisco VXI Desktop Virtualization Planning and Design Service

The Solution: A Unified, Pre-Tested and Validated Infrastructure

To meet the challenges of designing and implementing a modular desktop infrastructure, Cisco, Citrix, and EMC

have collaborated to create the data center solution for virtual desktops outlined in this document.

Key elements of the solution include:

A shared infrastructure that can scale easily

A shared infrastructure that can accommodate a variety of virtual desktop workloads

Cisco Networking Infrastructure This section describes the Cisco networking infrastructure components used in the configuration.

Cisco Nexus 5548 Switch The Cisco Nexus 5548 Switch is a 1RU, 10 Gigabit Ethernet, FCoE access-layer switch built to provide more than

500 Gbps throughput with very low latency. It has 20 fixed 10 Gigabit Ethernet and FCoE ports that accept modules

and cables meeting the Small Form-Factor Pluggable Plus (SFP+) form factor. One expansion module slot can be

configured to support up to six additional 10 Gigabit Ethernet and FCoE ports, up to eight FC ports, or a combination

of both. The switch has a single serial console port and a single out-of-band 10/100/1000-Mbps Ethernet

management port. Two N+1 redundant, hot-pluggable power supplies and five N+1 redundant, hot-pluggable fan

modules provide highly reliable front-to-back cooling.

Figure 15: Cisco Nexus 5548UP Unified Port Switch

Cisco Nexus 5500 Series Feature Highlights

The switch family's rich feature set makes the series ideal for rack-level, access-layer applications. It protects

investments in data center racks with standards-based Ethernet and FCoE features that allow IT departments to

consolidate networks based on their own requirements and timing.

The combination of high port density, wire-speed performance, and extremely low latency makes the

switch an ideal product to meet the growing demand for 10 Gigabit Ethernet at the rack level. The switch

family has sufficient port density to support single or multiple racks fully populated with blade and rack-

mount servers.

Built for today's data centers, the switches are designed just like the servers they support. Ports and power

connections are at the rear, closer to server ports, helping keep cable lengths as short and efficient as

possible. Hot-swappable power and cooling modules can be accessed from the front panel, where status

lights offer an at-a-glance view of switch operation. Front-to-back cooling is consistent with server designs,


supporting efficient data center hot-aisle and cold-aisle designs. Serviceability is enhanced with all

customer replaceable units accessible from the front panel. The use of SFP+ ports offers increased

flexibility to use a range of interconnect solutions, including copper for short runs and fibre for long runs.

FCoE and IEEE data center bridging features support I/O consolidation, ease management of multiple

traffic flows, and optimize performance. Although implementing SAN consolidation requires only the

lossless fabric provided by the Ethernet pause mechanism, the Cisco Nexus 5500 Series switches provide

additional features that create an even more easily managed, high-performance, unified network fabric.

Features and Benefits This section details the specific features and benefits provided by the Cisco Nexus 5500 Series.

10 Gigabit Ethernet, FCoE, and Unified Fabric Features

The Cisco Nexus 5500 Series is first and foremost a family of outstanding access switches for 10 Gigabit Ethernet

connectivity. Most of the features on the switches are designed for high performance with 10 Gigabit Ethernet. The

Cisco Nexus 5500 Series also supports FCoE on each 10 Gigabit Ethernet port that can be used to implement a

unified data center fabric, consolidating LAN, SAN, and server clustering traffic.

Low Latency

The cut-through switching technology used in the Cisco Nexus 5500 Series ASICs enables the product to offer a low

latency of 3.2 microseconds, which remains constant regardless of the size of the packet being switched. This

latency was measured on fully configured interfaces, with access control lists (ACLs), QoS, and all other data path

features turned on. The low latency on the Cisco Nexus 5500 Series enables application-to-application latency on

the order of 10 microseconds (depending on the NIC). These numbers, together with the congestion management

features described in the next section, make the Cisco Nexus 5500 Series a great choice for latency-sensitive

environments.
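To see why a constant latency figure is notable, the sketch below contrasts cut-through forwarding with a store-and-forward switch, which must buffer an entire frame before transmitting it. The 3.2-microsecond figure is quoted from the text above; the 10 Gbps link speed and frame sizes are illustrative assumptions, and the model deliberately ignores secondary effects such as header serialization.

```python
# Simplified latency model: cut-through latency is constant, while
# store-and-forward latency grows with frame size because the switch
# must receive the whole frame before forwarding it.

LINK_GBPS = 10
CUT_THROUGH_US = 3.2  # constant switch latency quoted in the text above

def serialization_us(frame_bytes, gbps=LINK_GBPS):
    """Time to clock a full frame onto the wire, in microseconds."""
    return frame_bytes * 8 / (gbps * 1000)

def store_and_forward_us(frame_bytes):
    """A store-and-forward switch adds full-frame serialization delay."""
    return serialization_us(frame_bytes) + CUT_THROUGH_US

for size in (64, 1500, 9000):  # minimum, standard, and jumbo frames
    print(f"{size:>5}B  cut-through {CUT_THROUGH_US:.1f}us  "
          f"store-and-forward {store_and_forward_us(size):.2f}us")
```

The gap widens with jumbo frames, which is why cut-through switching matters for latency-sensitive, high-resolution desktop traffic.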

Other features include: Nonblocking Line-Rate Performance, Single-Stage Fabric, Congestion Management, Virtual

Output Queues, Lossless Ethernet (Priority Flow Control), Delayed Drop FC over Ethernet, Hardware-Level I/O

Consolidation, and End-Port Virtualization.

Architecture and Design of XenDesktop 7.5 on Cisco Unified Computing System and EMC VNX Storage

Design Fundamentals There are many reasons to consider a virtual desktop solution such as an ever growing and diverse base of user

devices, complexity in management of traditional desktops, security, and even Bring Your Own Computer (BYOC)

to work programs. The first step in designing a virtual desktop solution is to understand the user community and the

type of tasks that are required to successfully execute their role. The following user classifications are provided:

Knowledge Workers today do not just work in their offices all day – they attend meetings, visit branch

offices, work from home, and even coffee shops. These anywhere workers expect access to all of their

same applications and data wherever they are.

External Contractors are increasingly part of your everyday business. They need access to certain

portions of your applications and data, yet administrators still have little control over the devices they use

and the locations they work from. Consequently, IT is stuck making trade-offs on the cost of providing

these workers a device vs. the security risk of allowing them access from their own devices.

Task Workers perform a set of well-defined tasks. These workers access a small set of applications and

have limited requirements from their PCs. However, since these workers are interacting with your

customers, partners, and employees, they have access to your most critical data.

Mobile Workers need access to their virtual desktop from everywhere, regardless of their ability to

connect to a network. In addition, these workers expect the ability to personalize their PCs, by installing

their own applications and storing their own data, such as photos and music, on these devices.

Shared Workstation users are often found in state-of-the-art university and business computer labs,

conference rooms or training centers. Shared workstation environments have the constant requirement to


re-provision desktops with the latest operating systems and applications as the needs of the organization

change.

After the user classifications have been identified and the business requirements for each user classification have

been defined, it becomes essential to evaluate the types of virtual desktops that are needed based on user

requirements. There are essentially six potential desktop environments for each user:

Traditional PC: A traditional PC is what "typically" constituted a desktop environment: a physical device

with a locally installed operating system.

Hosted Shared Desktop: A hosted, server-based desktop is a desktop where the user interacts through a

delivery protocol. With hosted, server-based desktops, a single installed instance of a server operating

system, such as Microsoft Windows Server 2012, is shared by multiple users simultaneously. Each user

receives a desktop "session" and works in an isolated memory space. Changes made by one user could

impact the other users.

Hosted Virtual Desktop: A hosted virtual desktop is a virtual desktop running either on a virtualization

layer (ESX) or on bare metal hardware. The user does not work with and sit in front of the desktop, but

instead the user interacts through a delivery protocol.

Published Applications: Published applications run entirely on the XenApp RDS server and the user

interacts through a delivery protocol. With published applications, a single installed instance of an

application, such as Microsoft Office, is shared by multiple users simultaneously. Each user receives

an application "session" and works in an isolated memory space.

Streamed Applications: Streamed desktops and applications run entirely on the user's local client device

and are sent from a server on demand. The user interacts with the application or desktop directly but the

resources may only be available while they are connected to the network.

Local Virtual Desktop: A local virtual desktop is a desktop running entirely on the user's local device and

continues to operate when disconnected from the network. In this case, the user’s local device is used as a

type 1 hypervisor and is synced with the data center when the device is connected to the network.

For the purposes of the validation represented in this document both XenDesktop 7.5 hosted virtual desktops and

hosted shared server desktops were validated. Each of the sections provides some fundamental design decisions for

this environment.

Understanding Applications and Data When the desktop user groups and sub-groups have been identified, the next task is to catalog group application and

data requirements. This can be one of the most time-consuming processes in the VDI planning exercise, but is

essential for the VDI project’s success. If the applications and data are not identified and co-located, performance

will be negatively affected.

The process of analyzing the variety of application and data pairs for an organization will likely be complicated by

the inclusion of cloud applications, like SalesForce.com. This application and data analysis is beyond the scope of this

Cisco Validated Design, but should not be omitted from the planning process. There are a variety of third party tools

available to assist organizations with this crucial exercise.

Project Planning and Solution Sizing Sample Questions Now that user groups, their applications and their data requirements are understood, some key project and solution

sizing questions may be considered.

General project questions should be addressed at the outset, including:

Has a VDI pilot plan been created based on the business analysis of the desktop groups, applications and

data?

Is there infrastructure and budget in place to run the pilot program?

Are the required skill sets to execute the VDI project available? Can we hire or contract for them?

Do we have end user experience performance metrics identified for each desktop sub-group?

How will we measure success or failure?

What is the future implication of success or failure?


Provided below is a short, non-exhaustive list of sizing questions that should be addressed for each user sub-group:

What is the desktop OS planned? Windows 7 or Windows 8?

32 bit or 64 bit desktop OS?

How many virtual desktops will be deployed in the pilot? In production? All Windows 7/8?

How much memory per target desktop group desktop?

Are there any rich media, Flash, or graphics-intensive workloads?

What is the end point graphics processing capability?

Will XenDesktop RDS be used for Hosted Shared Server Desktops or exclusively XenDesktop HVD?

Are there XenDesktop hosted applications planned? Are they packaged or installed?

Will Provisioning Server or Machine Creation Services be used for virtual desktop deployment?

What is the hypervisor for the solution?

What is the storage configuration in the existing environment?

Are there sufficient IOPS available for the write-intensive VDI workload?

Will there be storage dedicated and tuned for VDI service?

Is there a voice component to the desktop?

Is anti-virus a part of the image?

Is user profile management (e.g., non-roaming profile based) part of the solution?

What is the fault tolerance, failover, disaster recovery plan?

Are there additional desktop sub-group specific questions?
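The answers to the questions above feed directly into capacity math. The sketch below is a hypothetical sizing worksheet: the 1000-seat count and 70%/30% HSD/HVD mix come from this CVD, but the per-host densities and per-desktop IOPS figures are placeholder assumptions to be replaced with measurements from your own pilot.

```python
import math

# Placeholder sizing inputs. Only the seat count and the 70/30 mix are
# taken from this CVD; densities and IOPS are illustrative assumptions.
TOTAL_SEATS = 1000
HSD_RATIO = 0.70                          # hosted shared desktops
HSD_PER_HOST, HVD_PER_HOST = 200, 150     # assumed sessions/VMs per host
IOPS_PER_DESKTOP = 10                     # assumed steady-state, write-heavy

hsd_seats = round(TOTAL_SEATS * HSD_RATIO)        # 700
hvd_seats = TOTAL_SEATS - hsd_seats               # 300

hsd_hosts = math.ceil(hsd_seats / HSD_PER_HOST)   # round up to whole hosts
hvd_hosts = math.ceil(hvd_seats / HVD_PER_HOST)
total_hosts = hsd_hosts + hvd_hosts + 1           # +1 spare for failover

steady_iops = TOTAL_SEATS * IOPS_PER_DESKTOP
print(f"hosts: {total_hosts} (HSD {hsd_hosts}, HVD {hvd_hosts}, +1 spare)")
print(f"steady-state IOPS budget: {steady_iops}")
```

Boot and login storms can multiply the steady-state IOPS figure several times over, so the storage answer should be sized against the peak, not the average.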

Hypervisor Selection Citrix XenDesktop is hypervisor-agnostic, so any of the following three hypervisors can be used to host RDS- and

VDI-based desktops:

VMware vSphere: VMware vSphere comprises the management infrastructure or virtual center server

software and the hypervisor software that virtualizes the hardware resources on the servers. It offers

features like Distributed Resource Scheduler, vMotion, high availability, Storage vMotion, VMFS, and a

multipathing storage layer. More information on vSphere can be obtained at the VMware web site:

http://www.vmware.com/products/datacenter-virtualization/vsphere/overview.html.

Hyper-V: Microsoft Windows Server with Hyper-V is available in Standard, Server Core, and free Hyper-

V Server versions. More information on Hyper-V can be obtained at the Microsoft web site:

http://www.microsoft.com/en-us/server-cloud/windows-server/default.aspx.

XenServer: Citrix® XenServer® is a complete, managed server virtualization platform built on the

powerful Xen® hypervisor. Xen technology is widely acknowledged as the fastest and most secure

virtualization software in the industry. XenServer is designed for efficient management of Windows and

Linux virtual servers and delivers cost-effective server consolidation and business continuity. More

information on XenServer can be obtained at the web site:

http://www.citrix.com/products/xenserver/overview.html.

Note: For this CVD, the hypervisor used was VMware ESXi 5.5.

Desktop Virtualization Design Fundamentals An ever growing and diverse base of user devices, complexity in management of traditional desktops, security, and

even Bring Your Own (BYO) device to work programs are prime reasons for moving to a virtual desktop solution.

When evaluating a Desktop Virtualization deployment, consider the following:

Citrix Design Fundamentals Citrix XenDesktop 7.5 integrates Hosted Shared and VDI desktop virtualization technologies into a unified

architecture that enables a scalable, simple, efficient, and manageable solution for delivering Windows applications

and desktops as a service.


Users can select applications from an easy-to-use “store” that is accessible from tablets, smartphones, PCs, Macs,

and thin clients. XenDesktop delivers a native touch-optimized experience with HDX high-definition performance,

even over mobile networks.

Machine Catalogs

Collections of identical Virtual Machines (VMs) or physical computers are managed as a single entity called a

Machine Catalog. In this CVD, VM provisioning relies on Citrix Provisioning Services to make sure that the

machines in the catalog are consistent. In this CVD, machines in the Machine Catalog are configured to run either a

Windows Server OS (for RDS hosted shared desktops) or a Windows Desktop OS (for hosted pooled VDI

desktops).

Delivery Groups

To deliver desktops and applications to users, you create a Machine Catalog and then allocate machines from the

catalog to users by creating Delivery Groups. Delivery Groups provide desktops, applications, or a combination of

desktops and applications to users. Creating a Delivery Group is a flexible way of allocating machines and

applications to users. In a Delivery Group, you can:

Use machines from multiple catalogs

Allocate a user to multiple machines

Allocate multiple users to one machine

As part of the creation process, you specify the following Delivery Group properties:

Users, groups, and applications allocated to Delivery Groups

Desktop settings to match users' needs

Desktop power management options

The graphic below illustrates how users access desktops and applications through machine catalogs and delivery

groups. (Note that only Server OS and Desktop OS Machines are configured in this CVD configuration to support

hosted shared and pooled virtual desktops.)
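The allocation rules above (machines drawn from multiple catalogs, one user on several machines, several users on one machine) can be sketched as a small data model. The class and field names below are invented for illustration and are not the Citrix SDK:

```python
from dataclasses import dataclass, field

# Illustrative data model only, not Citrix code. It mirrors the rules
# above: a Delivery Group can draw machines from multiple catalogs,
# a user can be allocated several machines, and several users can
# share one (Server OS) machine.

@dataclass
class MachineCatalog:
    name: str
    os_type: str                                   # "ServerOS" or "DesktopOS"
    machines: list = field(default_factory=list)

@dataclass
class DeliveryGroup:
    name: str
    machines: list = field(default_factory=list)   # may span catalogs
    users: dict = field(default_factory=dict)      # user -> [machines]

    def add_machines(self, catalog, names):
        for n in names:
            catalog.machines.append(n)
            self.machines.append(n)

    def allocate(self, user, machine):
        self.users.setdefault(user, []).append(machine)

rds = MachineCatalog("RDS-HSD", "ServerOS")
vdi = MachineCatalog("VDI-Pooled", "DesktopOS")
dg = DeliveryGroup("Mixed-Desktops")
dg.add_machines(rds, ["RDS-01"])
dg.add_machines(vdi, ["VDI-01", "VDI-02"])
dg.allocate("alice", "RDS-01")   # many users can share one Server OS VM
dg.allocate("bob", "RDS-01")
dg.allocate("alice", "VDI-01")   # one user, multiple machines
```

In the real product these relationships are created through Citrix Studio; the sketch only mirrors the logical containment.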

Citrix Provisioning Services Citrix XenDesktop 7.5 can be deployed with or without Citrix Provisioning Services (PVS). The advantage of using

Citrix PVS is that it allows virtual machines to be provisioned and re-provisioned in real-time from a single shared-


disk image. In this way administrators can completely eliminate the need to manage and patch individual systems

and reduce the number of disk images that they manage, even as the number of machines continues to grow,

simultaneously providing the efficiencies of centralized management with the benefits of distributed processing.

The Provisioning Services solution’s infrastructure is based on software-streaming technology. After installing and

configuring Provisioning Services components, a single shared disk image (vDisk) is created from a device’s hard

drive by taking a snapshot of the OS and application image, and then storing that image as a vDisk file on the

network. A device that is used during the vDisk creation process is the Master target device. Devices or virtual

machines that use the created vDisks are called target devices.

When a target device is turned on, it is set to boot from the network and to communicate with a Provisioning Server.

Unlike thin-client technology, processing takes place on the target device (Step 1).

The target device downloads the boot file from a Provisioning Server (Step 2) and boots. Based on the boot

configuration settings, the appropriate vDisk is mounted on the Provisioning Server (Step 3). The vDisk software is

then streamed to the target device as needed, appearing as a regular hard drive to the system.

Instead of immediately pulling all the vDisk contents down to the target device (as with traditional imaging

solutions), the data is brought across the network in real-time as needed. This approach allows a target device to get

a completely new operating system and set of software in the time it takes to reboot. This approach dramatically

decreases the amount of network bandwidth required, making it possible to support a larger number of target

devices on a network without impacting performance.
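The on-demand behavior described above can be approximated with a toy model (not Citrix code; the block counts and sizes are made up) that counts how much of a vDisk actually crosses the network when only the blocks a boot touches are fetched:

```python
# Toy sketch of PVS-style on-demand streaming: the target fetches only
# the vDisk blocks it reads, instead of copying the whole image up
# front as a traditional imaging solution would.

VDISK_BLOCKS = 10_000          # whole image, in blocks (illustrative)
BLOCK_KB = 512

class StreamedVDisk:
    def __init__(self):
        self.fetched = set()   # blocks pulled over the network so far

    def read(self, block):
        if block not in self.fetched:
            self.fetched.add(block)   # network fetch happens only once
        return b"..."                 # block payload (stubbed)

    def network_mb(self):
        return len(self.fetched) * BLOCK_KB / 1024

disk = StreamedVDisk()
for b in range(0, 800):        # a boot touches a fraction of the image
    disk.read(b)

full_mb = VDISK_BLOCKS * BLOCK_KB / 1024
print(f"streamed {disk.network_mb():.0f} MB vs {full_mb:.0f} MB full copy")
```

Repeated reads of the same block cost nothing extra, which is why rebooting a target into a completely new image is fast relative to re-imaging it.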

Citrix PVS can create desktops as Pooled or Private:

Pooled Desktop: A pooled virtual desktop uses Citrix PVS to stream a standard desktop image to multiple

desktop instances upon boot.

Private Desktop: A private desktop is a single desktop assigned to one distinct user.

The alternative to Citrix Provisioning Services for pooled desktop deployments is Citrix Machine Creation Services

(MCS), which is integrated with the XenDesktop Studio console.

Locating the PVS Write Cache

When considering a PVS deployment, there are some design decisions that need to be made regarding the write

cache for the target devices that leverage provisioning services. The write cache is a cache of all data that the target

device has written. If data is written to the PVS vDisk in a caching mode, the data is not written back to the base

vDisk. Instead it is written to a write cache file in one of the following locations:

Cache on device hard drive. Write cache exists as a file in NTFS format, located on the target device's hard

drive. This option frees up the Provisioning Server since it does not have to process write requests and does

not have the finite limitation of RAM.

Cache on device hard drive persisted. (Experimental Phase) This is the same as “Cache on device hard

drive”, except that the cache persists. At this time, this method is an experimental feature only, and is only

supported for NT6.1 or later (Windows 7 and Windows 2008 R2 and later). This method also requires a

different bootstrap.

Cache in device RAM. Write cache can exist as a temporary file in the target device’s RAM. This provides

the fastest method of disk access since memory access is always faster than disk access.


Cache in device RAM with overflow on hard disk. This method uses VHDX differencing format and is

only available for Windows 7 and Server 2008 R2 and later. When RAM is zero, the target device write

cache is only written to the local disk. When RAM is not zero, the target device write cache is written to

RAM first. When RAM is full, the least recently used block of data is written to the local differencing disk

to accommodate newer data on RAM. The amount of RAM specified is the non-paged kernel memory that

the target device will consume.

Cache on a server. Write cache can exist as a temporary file on a Provisioning Server. In this configuration,

all writes are handled by the Provisioning Server, which can increase disk I/O and network traffic. For

additional security, the Provisioning Server can be configured to encrypt write cache files. Since the write-

cache file persists on the hard drive between reboots, encrypted data provides data protection in the event a

hard drive is stolen.

Cache on server persisted. This cache option allows changes to be saved between reboots. Using this

option, a rebooted target device is able to retrieve changes made from previous sessions that differ from the

read only vDisk image. If a vDisk is set to this method of caching, each target device that accesses the

vDisk automatically has a device-specific, writable disk file created. Any changes made to the vDisk image

are written to that file, which is not automatically deleted upon shutdown.
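The RAM-with-overflow option above can be sketched as a small LRU cache. This is an illustration of the described policy, not the PVS implementation:

```python
from collections import OrderedDict

# Sketch of "cache in device RAM with overflow on hard disk": writes
# land in RAM first; when RAM is full, the least recently used block
# spills to the local differencing disk to make room for newer data.
# When RAM is zero, writes go only to the local disk, as the text notes.

class RamOverflowWriteCache:
    def __init__(self, ram_blocks):
        self.ram_blocks = ram_blocks
        self.ram = OrderedDict()   # block -> data, kept in LRU order
        self.disk = {}             # overflow (the differencing disk)

    def write(self, block, data):
        if self.ram_blocks == 0:               # RAM disabled: disk only
            self.disk[block] = data
            return
        if block in self.ram:
            self.ram.move_to_end(block)        # now most recently used
        elif len(self.ram) >= self.ram_blocks:
            victim, vdata = self.ram.popitem(last=False)  # evict LRU
            self.disk[victim] = vdata
        self.ram[block] = data

cache = RamOverflowWriteCache(ram_blocks=2)
cache.write(1, "a"); cache.write(2, "b")
cache.write(1, "a2")          # touching block 1 makes block 2 the LRU
cache.write(3, "c")           # RAM full: block 2 spills to disk
print(sorted(cache.ram), sorted(cache.disk))
```

The practical consequence is that sizing the RAM portion against the working set keeps most writes at memory speed while the disk absorbs only the overflow.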

In this CVD, PVS 7.1 was used to manage Pooled Desktops with cache on device storage for each virtual machine.

This design enables good scalability to many thousands of desktops. Provisioning Server 7.1 was used for Active

Directory machine account creation and management as well as for streaming the shared disk to the hypervisor

hosts.

Example XenDesktop Deployments Two examples of typical XenDesktop deployments are the following:

A distributed components configuration

A multiple site configuration

Since XenDesktop 7.5 is based on a unified architecture, both configurations can deliver a combination of Hosted

Shared Desktops (HSDs, using a Server OS machine) and Hosted Virtual Desktops (HVDs, using a Desktop OS).

Distributed Components Configuration

You can distribute the components of your deployment among a greater number of servers, or provide greater

scalability and failover by increasing the number of controllers in your site. You can install management consoles on

separate computers to manage the deployment remotely. A distributed deployment is necessary for an infrastructure

based on remote access through NetScaler Gateway (formerly called Access Gateway).

The diagram below shows an example of a distributed components configuration. A simplified version of this

configuration is often deployed for an initial proof-of-concept (POC) deployment. The CVD described in this

document deploys Citrix XenDesktop in a configuration that resembles this distributed components configuration

shown.


Multiple site configuration

If you have multiple regional sites, you can use Citrix NetScaler to direct user connections to the most appropriate

site and StoreFront to deliver desktops and applications to users.

In the diagram below depicting multiple sites, a site was created in each of two data centers. Having two sites globally,

rather than just one, minimizes the amount of unnecessary WAN traffic. Two Cisco blade servers host the required

infrastructure services (AD, DNS, DHCP, Profile, SQL, Citrix XenDesktop management, and web servers).


You can use StoreFront to aggregate resources from multiple sites to provide users with a single point of access with

NetScaler. A separate Studio console is required to manage each site; sites cannot be managed as a single entity.

You can use Director to support users across sites.

Citrix NetScaler accelerates application performance, load balances servers, increases security, and optimizes the

user experience. In this example, two NetScalers are used to provide a high availability configuration. The

NetScalers are configured for Global Server Load Balancing and positioned in the DMZ to provide a multi-site,

fault-tolerant solution.

Designing a XenDesktop Environment for a Mixed Workload With Citrix XenDesktop 7.5, the method you choose to provide applications or desktops to users depends on the

types of applications and desktops you are hosting and available system resources, as well as the types of users and

user experience you want to provide.

Server OS machines

You want: Inexpensive server-based delivery to minimize the cost of delivering applications to a

large number of users, while providing a secure, high-definition user experience.

Your users: Perform well-defined tasks and do not require personalization or offline access to

applications. Users may include task workers such as call center operators and retail workers, or

users that share workstations.

Application types: Any application.

Desktop OS machines

You want: A client-based application delivery solution that is secure, provides centralized

management, and supports a large number of users per host server (or hypervisor), while

providing users with applications that display seamlessly in high-definition.

Your users: Are internal, external contractors, third-party collaborators, and other provisional

team members. Users do not require off-line access to hosted applications.

Application types: Applications that might not work well with other applications or might

interact with the operating system, such as the .NET Framework. These types of applications are ideal


for hosting on virtual machines.

Applications running on older operating systems such as Windows XP or Windows Vista, and

older architectures, such as 32-bit or 16-bit. By isolating each application on its own virtual

machine, if one machine fails, it does not impact other users.

Remote PC

Access

You want: Employees with secure remote access to a physical computer without using a VPN.

For example, the user may be accessing their physical desktop PC from home or through a public

Wi-Fi hotspot. Depending upon the location, you may want to restrict the ability to print or copy

and paste outside of the desktop. This method enables BYO device support without migrating

desktop images into the datacenter.

Your users: Employees or contractors that have the option to work from home, but need access

to specific software or data on their corporate desktops to perform their jobs remotely.

Host: The same as Desktop OS machines.

Application types: Applications that are delivered from an office computer and display

seamlessly in high definition on the remote user's device.

For the Cisco Validated Design described in this document, a mix of Hosted Shared Desktops (HSDs) using Server

OS machines and Hosted Virtual Desktops (HVDs) using Desktop OS machines were configured and tested. The

mix consisted of 70% HSDs to 30% HVDs. The following sections discuss design decisions relative to the Citrix

XenDesktop deployment, including the CVD test environment.
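The 70/30 mix translates directly into seat counts for the 1000-seat validated design. A minimal sketch of that arithmetic (names are illustrative only):

```python
# Seat counts for the 70% HSD / 30% HVD mix described above.
# TOTAL_SEATS and the ratio come from the CVD; variable names are ours.
TOTAL_SEATS = 1000

hsd_seats = TOTAL_SEATS * 70 // 100   # Hosted Shared Desktops (Server OS)
hvd_seats = TOTAL_SEATS - hsd_seats   # Hosted Virtual Desktops (Desktop OS)

print(hsd_seats, hvd_seats)  # 700 300
```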

Citrix Unified Design Fundamentals

Citrix XenDesktop 7.5 integrates Hosted Shared and VDI desktop virtualization technologies into a unified

architecture that enables a scalable, simple, efficient, and manageable solution for delivering Windows applications

and desktops as a service.

Users can select applications from an easy-to-use “store” that is accessible from tablets, smartphones, PCs, Macs,

and thin clients. XenDesktop delivers a native touch-optimized experience with HDX high-definition performance,

even over mobile networks.

Machine Catalogs

Collections of identical Virtual Machines (VMs) or physical computers are managed as a single entity called a

Machine Catalog. In this CVD, VM provisioning relies on Citrix Provisioning Services to make sure that the

machines in the catalog are consistent. In this CVD, machines in the Machine Catalog are configured to run either a

Windows Server OS (for RDS hosted shared desktops) or a Windows Desktop OS (for hosted pooled VDI

desktops).

Delivery Groups

To deliver desktops and applications to users, you create a Machine Catalog and then allocate machines from the

catalog to users by creating Delivery Groups. Delivery Groups provide desktops, applications, or a combination of

desktops and applications to users. Creating a Delivery Group is a flexible way of allocating machines and

applications to users. In a Delivery Group, you can:

Use machines from multiple catalogs

Allocate a user to multiple machines

Allocate multiple users to one machine

As part of the creation process, you specify the following Delivery Group properties:

Users, groups, and applications allocated to Delivery Groups

Desktop settings to match users' needs

Desktop power management options
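The catalog-to-group relationships above can be modeled as simple data structures. This is an illustrative sketch only, not the Citrix SDK; all class and field names are our own:

```python
from dataclasses import dataclass, field

# Illustrative model of the rules above: a Delivery Group can draw machines
# from multiple catalogs, a user can be allocated to multiple machines, and
# multiple users can share one machine.

@dataclass
class MachineCatalog:
    name: str
    os_type: str                       # "ServerOS" or "DesktopOS"
    machines: list = field(default_factory=list)

@dataclass
class DeliveryGroup:
    name: str
    machines: list = field(default_factory=list)   # may span catalogs
    users: list = field(default_factory=list)

rds = MachineCatalog("RDS-Catalog", "ServerOS", ["RDS-01", "RDS-02"])
vdi = MachineCatalog("HVD-Catalog", "DesktopOS", ["HVD-01"])

# One group drawing from both catalogs, with two users allocated.
dg = DeliveryGroup("Mixed-DG",
                   machines=rds.machines + vdi.machines,
                   users=["alice", "bob"])

print(len(dg.machines))  # 3
```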


The graphic below shows how users access desktops and applications through machine catalogs and delivery

groups. (Note that both Server OS and Desktop OS Machines are configured in this CVD to support a combination

of hosted shared and pooled virtual desktops.)

Storage Architecture Design

The EMC VNX™ family is optimized for virtual applications, delivering industry-leading innovation and enterprise

capabilities for file, block, and object storage in a scalable, easy-to-use solution. This next-generation storage

platform combines powerful and flexible hardware with advanced efficiency, management, and protection software

to meet the demanding needs of today’s enterprises.

The EMC VNX5400 used in this solution provides a comprehensive storage architecture for hosting all virtual desktop

components listed below on a unified storage platform.

ESXi OS is stored on an iSCSI LUN from which each vSphere host is booted. The boot from SAN design

allows UCS service profiles to be portable from one blade to another when the blades do not use local

disks.

PVS vDisk is hosted on a VNX CIFS share to provide central management of vDisk by eliminating

duplicated copies of the same vDisk image.

PVS write cache hosted on NFS datastores simplifies VM storage provisioning.

PXE boot image for PVS-based desktop is serviced by the TFTP server hosted on the VNX5400.

User profiles defined by Citrix User Profile Management (UPM) and user home directories both reside on

VNX CIFS shares that can leverage VNX deduplication, compression, and data protection.

Solution Validation

This section details the configuration and tuning that was performed on the individual components to produce a

complete, validated solution.


Configuration Topology for a Scalable XenDesktop 7.5 Mixed

Workload Desktop Virtualization Solution

Figure 16: Cisco Solutions for EMC VSPEX XenDesktop 7.5 1000 Seat Architecture Block

Figure 16 captures the architectural diagram for this study. The architecture is divided into four

distinct layers:

Cisco UCS Compute Platform

The Virtual Desktop Infrastructure and Virtual Desktops that run on Cisco UCS blade hypervisor hosts

Network Access layer and LAN

Storage Access via iSCSI on EMC VNX5400 deployment

Figure 17 details the physical configuration of the 1000-seat Citrix XenDesktop 7.5 environment.


Figure 17: Detailed Architecture of the EMC VSPEX XenDesktop 7.5 1000 Seat Mixed Workload

Cisco Unified Computing System Configuration

This section describes the UCS configuration that was done as part of the infrastructure build-out. The racking,

power, and installation of the chassis are described in the install guide (see www.cisco.com/c/en/us/support/servers-unified-computing/ucs-manager/products-installation-guides-list.html) and are beyond the scope of this document.

More details on each step can be found in the following documents:

Cisco UCS Manager Configuration Guides – GUI and Command Line Interface (CLI)

Cisco UCS Manager - Configuration Guides - Cisco

Base Cisco UCS System Configuration

To configure the Cisco Unified Computing System, perform the following steps:

1

Bring up the Fabric Interconnect (FI) and, from a serial console connection, set the IP address, gateway, and

hostname of the primary fabric interconnect. Then bring up the second fabric interconnect after connecting the dual

cables between them. The second fabric interconnect automatically recognizes the primary and asks if you want it to

be part of the cluster; answer yes and set its IP address, gateway, and hostname. Once this is done, all access to the

FI can be done remotely. You will also configure the virtual IP address used to connect to the FI cluster, so you need

a total of three IP addresses to bring it online. You can also wire up the chassis to the FI, using 1, 2, 4, or 8 links per

IO Module, depending on your application bandwidth requirement. We connected four links to each module.

2 Now connect with your browser to the Virtual IP and launch UCS Manager. The Java-based UCS Manager

will let you do everything that you could do from the CLI. We will highlight the GUI methodology here.

3 First check the firmware on the system and see if it is current. Visit: Download Software for

Cisco UCS Infrastructure and UCS Manager Software to download the most current Cisco UCS Infrastructure and

Cisco UCS Manager software. Use the UCS Manager Equipment tab in the left pane, then the Firmware

Management tab in the right pane and Packages sub-tab to view the packages on the system. Use the Download

Tasks tab to download needed software to the FI. The firmware release used in this paper is 2.2(1d).

If the firmware is not current, follow the installation and upgrade guide to upgrade the UCS Manager firmware. We

will use UCS Policy in Service Profiles later in this document to update all UCS components in the solution.

Note: The BIOS and Board Controller version numbers do not track the IO Module, Adapter, or CIMC

controller version numbers in the packages.

4 Configure and enable the server ports on the FI. These are the ports that will connect the chassis to the FIs.

5 Configure and enable uplink Ethernet ports:


5a

On the LAN tab in the Navigator pane, configure the required Port Channels and Uplink Interfaces on both Fabric

Interconnects, building the Ethernet uplink port channels from the Ethernet network ports configured above.

6 On the Equipment tab, expand the Chassis node in the left pane, then click each chassis in the left pane and click

Acknowledge Chassis in the right pane to bring the chassis online and enable blade discovery.


7 Use the Admin tab in the left pane to configure logging, users and authentication, key management,

communications, statistics, time zone and NTP services, and licensing. Configuring your Management IP Pool

(which provides IP-based access to the KVM of each UCS Blade Server), Time Zone Management (including NTP

time sources), and uploading your license files are critical steps in the process.

8 Create all the pools: MAC pool, UUID pool, IQN suffix pool, iSCSI Initiator IP Address Pool, External

Management IP Address Pool, and Server pools.

8.1

From the LAN tab in the navigator, under the Pools node, we created a MAC address pool of sufficient size for the

environment. In this project, we created a single pool with two address ranges for expandability.


8.2 From the LAN tab under the Pools node, we created two iSCSI Initiator address pools to service the fault-tolerant

pair of iSCSI virtual NICs that will be created later. Each pool has 16 addresses. Pool "a" is on network

172.16.96.1/19 and pool "b" is on network 172.16.128.1/19.
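As a sketch of how those two 16-address initiator ranges lay out on their /19 networks (illustrative only, not UCSM; the starting offset is an assumption based on the pool addresses quoted above):

```python
import ipaddress

# Derive a 16-address iSCSI initiator range on a /19 storage network.
# Networks come from the text above; start_host=1 is our assumption.
def initiator_range(network: str, start_host: int = 1, size: int = 16):
    net = ipaddress.ip_network(network)
    base = int(net.network_address)
    return [str(ipaddress.ip_address(base + start_host + i)) for i in range(size)]

pool_a = initiator_range("172.16.96.0/19")    # iSCSI fabric "a"
pool_b = initiator_range("172.16.128.0/19")   # iSCSI fabric "b"

print(pool_a[0], pool_a[-1])  # 172.16.96.1 172.16.96.16
```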

8.3

From the LAN under the Pools node, we created an External Management IP address pool for use by the Cisco UCS

KVM connections to the blade servers in the study.


8.4 The next pool we created is the Server UUID pool. On the Servers tab in the Navigator page under the Pools node

we created a single UUID Pool for the test environment. Each Cisco UCS Blade Server requires a unique UUID to

be assigned by its Service profile.

8.5 We created three Server Pools for use in our Service Profile Templates as selection criteria for automated profile

association. Server Pools were created on the Servers tab in the navigation page under the Pools node. Only the pool

name was created, no servers were added:


Note: We created two iSCSI initiator IP address pools.

8.6

We created one IQN suffix pool for the iSCSI environment. From the SAN tab, SAN node, Pools, root, right-click

the IQN Pools node and click Create IQN Suffix Pool.


8.7 We created three Server Pool Policy Qualifications to identify the blade server model, its processor, and the amount

of RAM onboard for placement into the correct Server Pool by the Service Profile Template. In this case, we used

processors to select the Infrastructure blades running E5-2650 processors, and memory to select HVD (384 GB

RAM) or HSD (256 GB RAM) blades. (We could have used a combination of chassis, slot, server model, or any

combination of those things to make the selection.)

8.8 The next step in automating the server selection process is to create corresponding Server Pool Policies for each

Cisco UCS Blade Server configuration, utilizing the Server Pool and Server Pool Policy Qualifications created

earlier.


To create the policy, right-click the Server Pool Policy node, select Create Server Pool Policy, provide a name and

an optional description, select the Target Pool and the Qualification from the drop-downs, and click OK. Repeat for

each policy to be created.

9 On the LAN tab in the navigator pane, configure the VLANs for the environment:

In this project we utilized six VLANs to accommodate our four traffic types and a separate native VLAN for all

traffic that was not tagged on the network. The Storage iSCSI VLANs provided boot communications and carried

NFS and CIFS storage traffic.

11

On the LAN tab in the navigator pane, under the policies node configure the vNIC templates that will be used in the

Service Profiles. In this project, we utilized four virtual NICs per host, two to each Fabric Interconnect for resiliency.

Two of the four vNIC templates, eth2 and eth3 are utilized exclusively for iSCSI boot and file protocol traffic. QoS

is handled by Cisco Nexus 1000V or by Cisco VM-FEX for the VM-Network, so no QoS policy is set on the

templates. Both in-band and out-of-band management VLANs are trunked to the eth0 and eth1 vNIC templates. The

Default VLAN is not used, but is marked as the only Native VLAN in the environment.


11b To create vNIC templates for eth0 and eth1 on both fabrics, select the Fabric ID, select all VLANs except iSCSI-A,

iSCSI-B and Default, set the MTU size to 9000, select the MAC Pool, then click OK.

For eth2 and eth3, select the Fabric ID, select VLAN iSCSI-A for eth2 or VLAN iSCSI-B for eth3, set the MTU size

to 9000, select the MAC Pool, select Native, then click OK.


12

12a

We utilized the included Default Cisco UCS B200 M3 BIOS policy for the XenDesktop 7.5 HVD virtual machines.

To support VM-FEX for the XenDesktop RDS/HSD VMs, we created a performance BIOS Policy. The policy will

be applied to all Cisco UCS B200 M3 blade servers hosting XenDesktop 7.5 RDS and Infrastructure.

Prepare the Perf-Cisco BIOS Policy. From the Server tab, Policies, Root node, right-click the BIOS Policies

container and click Create BIOS Policy. Provide the policy name and step through the wizard making the choices

indicated on the screen shots below:


13

The Advanced Tab Settings

The remaining Advanced tab settings are at platform default or not configured. Similarly, the Boot Options and

Server Management tabs' settings are at their defaults. Many of the settings in this policy are the Cisco UCS B200 M3

BIOS default settings. We created this policy to illustrate the combined effect of the Platform Default and specific


settings for this use case.

Note: Be sure to Save Changes at the bottom of the page to preserve this setting. Be sure to add this policy

to your blade service profile template.

14 New in UCS Manager since version 2.1(1a), Host Firmware Package policies can be set by package version

across the UCS domain rather than by server model. (Note: You can still create specific packages for different

models or for specific purposes.) In our study, we created a Host Firmware Package for the UCS B200 M3 blades,

which were all assigned UCS Firmware version 2.2(1d). Right-click the Host Firmware Packages container,

click Create Host Firmware Package, provide a Name and Description (optional), and then click the Advanced

configuration radio button.

Note: The Simple configuration option allows you to create the package based on a particular UCS Blade

and/or Rack Server package that is uploaded on the FIs. We used the included Cisco UCS B200 M3 host

firmware package. The following screen shots illustrate how to create a custom package, selecting only

those packages that apply to the server as configured.

Continue through the CIMC, BIOS, Board Controller and Storage Controller tabs as follows:


Note: We did not use legacy or third-party FC adapters or HBAs, so there was no configuration required

on those tabs.

The result is a customized Host Firmware Package for the Cisco UCS B200 M3 blade servers.

15a

For the RDS template that will leverage Cisco VM-FEX, from the LAN tab, Policies, Root, right-click Dynamic

vNIC Connection Policies and click Create Dynamic vNIC Connection Policy. Provide a Name, Number of

Dynamic vNICs, the Adapter Policy, and Protection preference as shown below, then click OK.


15b

In the Servers tab, expand the Policies > root nodes, then select Adapter Policies. Right-click and choose Create

iSCSI Adapter Policy from the context menu.

15c In the Servers tab, expand Policies > root nodes. Select the Boot Policies node. Right-click and choose Create

Boot Policy from the context menu.


In the Create Boot Policy dialog complete the following:

Expand Local Devices

Select Add CD-ROM

Expand iSCSI vNICs

Select Add iSCSI Boot (iSCSI0) as Primary

Select Add iSCSI Boot (iSCSI1) as Secondary

Adjust boot order so it is CD-ROM, iSCSI0, iSCSI1.

Click Save Changes.

Create a service profile template using the pools, templates, and policies configured above. We created a total of

three Service Profile Templates, one for each workload type: XenDesktop HVD, XenDesktop Hosted Shared

Desktop (RDS), and Infrastructure Hosts (Infra), as follows:


To create a Service Profile Template, right-click the Service Profile Templates node on the Servers tab and click

Create Service Profile Template. The Create Service Profile template wizard will open.

Follow through each section, utilizing the policies and objects you created earlier, then click Finish.

On the Operational Policies screen, select the appropriate performance BIOS policy you created earlier to

ensure maximum LV DIMM performance.

For automatic deployment of service profiles from your template(s), you must associate a server pool that

contains blades with the template.

16a On the Create Service Profile Template wizard, we entered a unique name, selected the type as updating, and

selected the VSPEX-UUIDs Suffix Pool created earlier, then clicked Next.

16b We selected the Expert configuration option on the Networking page and clicked Add in the adapters window:


16c In the Create vNIC window, we entered a unique Name, checked the Use LAN Connectivity Template checkbox,

selected the vNIC Template from the drop-down, and the Adapter Policy the same way.


For the HSD template, we specified the VMware Passthru Policy and Dynamic Connection Policy for eth0 and

eth1.


16d We repeated the process for the remaining vNIC, resulting in the following:

Click Next to continue.

16e On the storage page, click Next.


16f

On the Zoning page, click Next to continue.


On the vNIC/vHBA Placement page, click Next to continue:


16g On the Server Boot Order page, select the iSCSI Boot policy iSCSIBoot, created earlier from the drop-down, then

click Next to proceed:


16h

Highlight Primary iSCSI vNIC iSCSI0, then click Modify iSCSI vNIC. Select the following values from the drop-

down menus:

Overlay vNIC: eth2 for iSCSI0; eth3 for iSCSI1

iSCSI Adapter Policy: iSCSI for both

VLAN: iSCSI-A for iSCSI0; iSCSI-B for iSCSI1

Click OK


16i

Repeat the process for the Secondary iSCSI adapter, iSCSI1, using eth3, the iSCSI Adapter Policy, and VLAN

iSCSI-B. Highlight the Primary iSCSI adapter, iSCSI0, then click Set iSCSI Boot Parameters.


16j

Complete the following fields by selecting the IQN Initiator Name Pool and iSCSI Initiator IP address pool created

earlier.


16k

Next, select the iSCSI Static Target Interface radio button, then the "+" sign at the right edge of the window below

it, and enter the EMC VNX5400 IQN target interface names, the iSCSI IPv4 address, and LUN ID 0. Repeat for the

second target interface as shown above.

Repeat the process for the Secondary iSCSI adapter, using the same Initiator Name Assignment, iscsi-initiator-pool-b,

and the appropriate EMC VNX5400 target interface names, iSCSI IPv4 address, and LUN ID 0.


16l

We did not create a Maintenance Policy for the project. Click Next to continue:


16m

On the Server Assignment page, make the following selections from the drop-downs and click the expand arrow on

the Firmware Management box as shown:

For the other two Service Profile Templates that were created for the project, we chose HostedVirtual or

Infrastructure for Pool Assignments and HVD or Infra for the Server Pool Qualification.

In all three cases, we utilized the Default Host Firmware Management policy for the Cisco UCS B200 M3 blades.

16n

16o

On the Operational Policies page, we expanded the BIOS Configuration drop-down and selected the

VMwarePassthru policy created earlier for the Hosted Shared desktops that utilize VM-FEX. Click Finish to complete

the Service Profile Template:

For the Hosted Virtual desktop and Infrastructure Service Profile Templates, we used the Default BIOS policy by

choosing <not set> in the BIOS Configuration section of the Operational Policies node of the Wizard.

Repeat the Create Service Profile Template for the two remaining templates.

The result is a Service Profile Template for each use case in the study and an Infrastructure template, as shown

below:

17 Now that we had created the Service Profile Templates for each UCS Blade Server model used in the project, we

used them to create the appropriate number of Service Profiles. To do so, in the Servers tab in the navigation page,

in the Service Profile Templates node, we expanded the root and selected Service Template Compute-Fabric-A,


then clicked on Create Service Profiles from Template in the right pane, Actions area:

18 We provided the naming prefix, the starting number, and the number of Service Profiles to create, and clicked OK.

We created the following number of Service Profiles from the respective Service Profile Templates:

Service Profile Template     Service Profile Name Prefix   Starting Number   Number of Profiles from Template

Compute-Fabric-A             XenDesktopHVD-0               1                 2

Compute-Fabric-B             XenDesktopHVD-0               3                 1

RDS-Fabric-A                 XenDesktopRDS-0               1                 2

RDS-Fabric-B                 XenDesktopRDS-0               3                 3

VM-Host-Infra-Fabric-A       VM-Host-Infra-0               1                 1

VM-Host-Infra-Fabric-B       VM-Host-Infra-0               2                 1
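A minimal sketch of how a naming prefix and starting number expand into individual Service Profile names, mirroring the prefix-plus-number convention in the table above (illustrative only, not UCS Manager itself):

```python
# Expand a Service Profile naming prefix and starting number into names,
# as UCS Manager does when creating profiles from a template.
def expand_profiles(prefix: str, start: int, count: int):
    return [f"{prefix}{start + i}" for i in range(count)]

# HVD and RDS rows from the table above.
profiles = (expand_profiles("XenDesktopHVD-0", 1, 2)
            + expand_profiles("XenDesktopHVD-0", 3, 1)
            + expand_profiles("XenDesktopRDS-0", 1, 2)
            + expand_profiles("XenDesktopRDS-0", 3, 3))

print(profiles[:3])  # ['XenDesktopHVD-01', 'XenDesktopHVD-02', 'XenDesktopHVD-03']
```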

19 Cisco UCS Manager created the requisite number of profiles and because of the Associated Server Pool and Server

Pool Qualification policy, the Cisco UCS B200 M3 blades in the test environment began automatically associating

with the proper Service Profile.


Note: The Login VSI profiles used to support the end-user-experience testing were created manually.

20 We verified from the Equipment tab that each server had a profile and that it received the correct profile.

At this point, the Cisco UCS Blade Servers are ready for hypervisor installation.

QoS and CoS in Cisco Unified Computing System

Cisco Unified Computing System provides different system classes of service to implement quality of service

including:

System classes that specify the global configuration for certain types of traffic across the entire system

QoS policies that assign system classes for individual vNICs

Flow control policies that determine how uplink Ethernet ports handle pause frames.

Time-sensitive applications hosted on the Cisco Unified Computing System must adhere to a

strict QoS for optimal performance.

System Class Configuration

System Class is the global configuration through which QoS rules are defined for all interfaces across the entire system.

By default, the system has a Best Effort class and an FCoE class.

– Best Effort is equivalent to "match any" in MQC terminology.

– FCoE is a special class defined for FCoE traffic; in MQC terminology, "match cos 3".

Up to four additional user-defined classes are allowed, with the following configurable properties:

– CoS to Class Map

– Weight (bandwidth)

– Per-class MTU

– Drop vs. no-drop property

The maximum MTU allowed per class is 9217.


Through Cisco Unified Computing System, one CoS value can be mapped to a particular class.

Apart from the FCoE class, only one additional class can be configured with the no-drop property.

Weight can be configured with a value from 0 to 10. The system internally calculates each class's

bandwidth share from the following equation (with rounding):

% bandwidth share of a given class = (weight of the given priority × 100) / (sum of the weights of all priorities)
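The bandwidth-share calculation above can be sketched as follows (illustrative only; the system applies its own rounding):

```python
# Compute per-class bandwidth share: (weight * 100) / sum of all weights,
# rounded, per the equation above.
def bandwidth_shares(weights: dict) -> dict:
    total = sum(weights.values())
    return {name: round(w * 100 / total) for name, w in weights.items()}

# Default weights from Table 6: Best Effort = 5, FCoE = 5.
print(bandwidth_shares({"best-effort": 5, "fc": 5}))  # {'best-effort': 50, 'fc': 50}
```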

Cisco UCS System Class Configuration

Cisco UCS defines user class names as follows.

Platinum

Gold

Silver

Bronze

Table 4. Name Table Map between Cisco Unified Computing System and the NXOS

Cisco UCS Names NXOS Names

Best effort Class-default

FCoE Class-fc

Platinum Class-Platinum

Gold Class-Gold

Silver Class-Silver

Bronze Class-Bronze

Table 5. Class to CoS Map by default in Cisco Unified Computing System

Cisco UCS Class Names Cisco UCS Default Class Value

Best effort Match any

Fc 3

Platinum 5

Gold 4

Silver 2

Bronze 1

Table 6. Default Weight in Cisco Unified Computing System

Cisco UCS Class Names Weight

Best effort 5

Fc 5

The Steps to Enable QoS on the Cisco Unified Computing System

For this study, we utilized four UCS QoS System Classes to prioritize four types of traffic in the infrastructure:


Table 7. QoS Priority to vNIC and VLAN Mapping

Cisco UCS QoS Priority    vNIC Assignment    VLAN Supported
Platinum                  eth0, eth1         VM-Network
Gold                      eth0, eth1         Not assigned
Silver                    eth0, eth1         VLAN516 (Management)
Bronze                    eth0, eth1         vMotion

In this study, all VLANs were trunked to eth0 and eth1, and both vNICs use Best Effort QoS. Detailed QoS was handled by the Cisco Nexus 1000V or Cisco VM-FEX and the Cisco Nexus 5548 switches, but it is important that the Cisco UCS QoS system classes match what the switches are using.

Configure the Platinum, Gold, Silver, and Bronze policies by checking the Enabled box. The Platinum policy (used for NFS and CIFS storage), the Bronze policy (used for vMotion), and Best Effort were configured for jumbo frames in the MTU column. Notice the option to set a no-packet-drop policy during this configuration. Click Save Changes at the bottom right corner before leaving this node.

Figure 18: Cisco UCS QoS System Class Configuration

This is a unique value proposition of the Cisco Unified Computing System with respect to end-to-end QoS. For example, the VLAN for the EMC VNX storage was configured with the Platinum policy and jumbo frames, providing end-to-end QoS and performance guarantees from the blade servers running the Nexus 1000V virtual distributed switches or Cisco VM-FEX, through the Cisco Nexus 5548UP access layer switches.

LAN Configuration

The access layer LAN configuration consists of a pair of Cisco Nexus 5548 switches (N5Ks), members of our family of low-latency, line-rate, 10 Gigabit Ethernet and FCoE switches, for the VDI deployment.

Cisco UCS and EMC VNX Ethernet Connectivity

Two 10 Gigabit Ethernet uplink ports are configured on each of the Cisco UCS 6248 fabric interconnects, and they are connected to the Cisco Nexus 5548 pair in a bow-tie manner in a port channel, as shown below.

The 6248 fabric interconnects are in end-host mode, as we are doing both iSCSI and Ethernet (NAS) data access, per the recommended best practice for the Cisco Unified Computing System. We built this out for scale and have provisioned 20 Gbps per fabric interconnect for Ethernet and NAS (Figure 19).

The EMC VNX5400 is also equipped with two dual-port SLICs, which are connected to the pair of N5Ks downstream. Both paths are active, providing failover capability. This allows end-to-end 10G access for file-based storage traffic. We have implemented jumbo frames on the ports and have priority flow control enabled, with Platinum CoS and QoS assigned to the vNICs carrying storage data access on the fabric interconnects.


The upstream configuration is beyond the scope of this document; there are good reference documents [4] that discuss best practices for the Cisco Nexus 5000 and 7000 Series Switches. New with the Cisco Nexus 5500 Series is an available Layer 3 module that was not used in these tests and is not covered in this document.

Figure 19: Ethernet Network Configuration with Upstream Cisco Nexus 5500 Series from the Cisco Unified Computing

System 6200 Series Fabric Interconnects and EMC VNX5400

Cisco Nexus 1000V Configuration in L3 Mode

1. To download Nexus 1000V 4.2(1)SV1(5.2), click the link below:

http://www.cisco.com/cisco/software/release.html?mdfid=282646785&flowid=3090&softwareid=282088129&release=4.2(1)SV1(5.2)&relind=AVAILABLE&rellifecycle=&reltype=latest

2. Extract the downloaded N1000V .zip file on the Windows host.

3. To start the N1000V installation, run the installation command from the command prompt. (Make sure the Windows host has the latest Java version installed.)


4. After running the installation command, the “Nexus 1000V Installation Management Center” appears.

5. Type the vCenter IP and the logon credentials.


6. Select the ESX host on which to install N1KV Virtual Switch Manager.


7. Select the OVA file from the extracted N1KV location to create the VSM.

8. Select the system redundancy type “HA,” type the virtual machine name for the N1KV VSM, and choose the datastore for the VSM.

9. To configure the L3 mode of installation, choose “L3: Configure port groups for L3.”


a. Create a port group named Control, specify the VLAN ID, and select the corresponding vSwitch.

b. Select the existing port group “VM Network” for N1K management and choose mgmt0 with the VLAN ID for the SVS connection between vCenter and the VSM.

c. For the L3 mgmt0 interface port-profile option, enter the VLAN that was predefined for ESXi management; a port group with L3 capability will be created accordingly. In this case it is the n1kv-L3 port group, as shown in the screenshot below.

10. To configure the VSM, type the switch name and enter the admin password for the VSM. Type the IP address, subnet mask, gateway, and domain ID (if multiple instances of the N1KV VSM need to be installed, make sure each is configured with a different domain ID), then select the SVS datacenter name and type the vSwitch0 native VLAN ID. (Make sure the native VLAN ID specified matches the native VLAN ID of the Cisco Unified Computing System and the Cisco Nexus 5000.)


11. Review the configuration and click Next to proceed with the installation.

12. Wait for the Nexus 1000V VSM installation to complete.


13. Click Finish to complete the VSM installation.

14. Log on (SSH or Telnet) to the N1KV VSM with its IP address and configure VLANs for ESXi management, Control, N1K management, and also for storage and vMotion purposes as shown below (VLAN IDs differ based on your network). First, create IP access lists for each QoS policy:


xenvsm# conf t

Enter the following configuration commands, one per line. End with CNTL/Z.

ip access-list mark_Bronze

10 permit ip any 172.20.75.0/24

20 permit ip 172.20.75.0/24 any

ip access-list mark_Gold

10 permit ip any 172.20.48.0/20

20 permit ip 172.20.48.0/20 any

ip access-list mark_Platinum

10 permit ip any 172.20.74.0/24

20 permit ip 172.20.74.0/24 any

ip access-list mark_Silver

10 permit ip any 172.20.73.0/24

20 permit ip 172.20.73.0/24 any

15. Create class maps for QoS policy

class-map type qos match-all Gold_Traffic

match access-group name mark_Gold

class-map type qos match-all Bronze_Traffic

match access-group name mark_Bronze

class-map type qos match-all Silver_Traffic

match access-group name mark_Silver

class-map type qos match-all Platinum_Traffic

match access-group name mark_Platinum

16. Create policy maps for QoS and set class of service

policy-map type qos VSPEX

class Platinum_Traffic

set cos 5

class Gold_Traffic

set cos 4

class Silver_Traffic

set cos 2

class Bronze_Traffic

set cos 1
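The marking chain built in steps 14 through 16 (ACL subnet match, then class map, then CoS value) can be modeled in a few lines of Python. This is a sketch for reasoning about the configuration, not part of the deployment; the subnets and CoS values are taken from the access lists and policy map above, and the function name is ours:

```python
import ipaddress

# Subnets from the mark_* access lists and CoS values from the policy map.
QOS_CLASSES = [
    ("Platinum", "172.20.74.0/24", 5),  # NFS/CIFS storage traffic
    ("Gold",     "172.20.48.0/20", 4),
    ("Silver",   "172.20.73.0/24", 2),
    ("Bronze",   "172.20.75.0/24", 1),  # vMotion traffic
]

def classify(ip):
    """Return (class_name, cos) for an IP, or None if no ACL matches."""
    addr = ipaddress.ip_address(ip)
    for name, subnet, cos in QOS_CLASSES:
        if addr in ipaddress.ip_network(subnet):
            return name, cos
    return None

print(classify("172.20.74.10"))  # -> ('Platinum', 5)
print(classify("192.168.1.1"))   # -> None
```

Note that the four subnets do not overlap (172.20.48.0/20 ends at 172.20.63.255), so each packet matches at most one class, mirroring the match-all class maps.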

17. Set VLANs for QoS:

vlan 1,517,272-276

vlan 1

name Native-VLAN


vlan 272

name VM-Network

vlan 517

name IB-MGMT-VLAN

vlan 276

name vMotion-VLAN

18. Create port profiles for the system uplinks and vEthernet port groups.

There are existing port profiles created during the install; do not modify or delete these port profiles.

port-profile type ethernet system-uplink

vmware port-group

switchport mode trunk

switchport trunk native vlan 6

switchport trunk allowed vlan 517,272-276

system mtu 9000

channel-group auto mode on mac-pinning

no shutdown

system vlan 517,272-276

state enabled

port-profile type vethernet IB-MGMT-VLAN

vmware port-group

switchport mode access

switchport access vlan 517

service-policy type qos input VSPEX

no shutdown

system vlan 517

max-ports 254

state enabled

port-profile type vethernet vMotion-VLAN

vmware port-group

switchport mode access

switchport access vlan 276

service-policy type qos input VSPEX

no shutdown

system vlan 276

state enabled

port-profile type vethernet VM-Network


vmware port-group

port-binding static auto expand

switchport mode access

switchport access vlan 272

service-policy type qos input VSPEX

no shutdown

system vlan 272

state enabled

port-profile type vethernet n1k-L3

capability l3control

vmware port-group

switchport mode access

switchport access vlan 517

service-policy type qos input VSPEX

no shutdown

system vlan 517

state enabled

19. Set the MTU size to 9000 on the Virtual Ethernet Modules

interface port-channel1

inherit port-profile system-uplink

vem 3

mtu 9000

interface port-channel2

inherit port-profile system-uplink

vem 5

mtu 9000

interface port-channel3

inherit port-profile system-uplink

vem 6

mtu 9000

interface port-channel4

inherit port-profile system-uplink

vem 4

mtu 9000

interface port-channel5

inherit port-profile system-uplink

vem 7


mtu 9000

interface port-channel6

inherit port-profile system-uplink

vem 8

mtu 9000

interface port-channel7

inherit port-profile system-uplink

vem 9

mtu 9000

interface port-channel8

inherit port-profile system-uplink

vem 10

mtu 9000

interface port-channel9

inherit port-profile system-uplink

vem 12

mtu 9000
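Since the nine port-channel stanzas above differ only in the port-channel number and the VEM module, they can be generated mechanically. The following Python sketch (the mapping and function are ours, not part of the CVD) reproduces the configuration shown:

```python
# Port-channel number -> VEM module number, taken from the config above.
PC_TO_VEM = {1: 3, 2: 5, 3: 6, 4: 4, 5: 7, 6: 8, 7: 9, 8: 10, 9: 12}

def vem_portchannel_config(mapping, mtu=9000):
    """Emit one port-channel stanza per VEM with the given MTU."""
    lines = []
    for pc, vem in sorted(mapping.items()):
        lines += [
            f"interface port-channel{pc}",
            "  inherit port-profile system-uplink",
            f"  vem {vem}",
            f"  mtu {mtu}",
        ]
    return "\n".join(lines)

print(vem_portchannel_config(PC_TO_VEM))
```

Generating repetitive stanzas like this helps avoid a missed MTU setting on one of the nine VEMs, which would silently break jumbo-frame storage traffic on that host.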

20. After creating the port profiles, make sure vCenter shows all the port profiles and port groups under the respective N1KV VSM. Then add the ESXi host to the VSM.

– Go to Inventory > Networking, select the DVS for N1KV, and click the Hosts tab.

21. Right-click and select Add Host to vSphere Distributed Switch.


This brings up the ESXi hosts that are not part of the existing configuration.

22. Select the ESXi host to add, choose the vNICs to be assigned, click Select an Uplink Port Group, and select system-uplink for both vmnic0 and vmnic1.

23. After selecting the appropriate uplinks, click Next.


24. On the Network Connectivity tab, select the destination port group for vmk0, then click Next.

25. On the tab for virtual machine networking, select VMs and assign them to a destination port group, if there are any. Otherwise, click Next to reach Ready to Complete.


26. Verify the settings and click Finish to add the ESXi host to the N1KV DVS.


Note: This will invoke VMware Update Manager (VUM) to automatically push the VEM installation to the selected ESXi hosts. After the staging, installation, and remediation process completes successfully, the ESXi host will be added to the N1KV VSM. From the vCenter task manager, you can check the progress of the VEM installation.

27. In the absence of Update Manager:

– Upload the .vib file cross_cisco-vem-v162-4.2.1.2.2.1a.0-3.1.1.vib for the VEM installation to a local or remote datastore; the file can be obtained by browsing to the management IP address of the N1KV VSM.

28. Log in to each ESXi host using the ESXi shell or an SSH session.

29. Run the following command:


esxcli software vib install -v /vmfs/volumes/datastore/cross_cisco-vem-v162-4.2.1.2.2.1a.0-3.1.1.vib

30. Verify the successful installation of ESXi VEM and the status of ESXi host.

31. Using PuTTY, SSH into the N1KV VSM and run the sh module command, which shows all the ESXi hosts attached to that VSM.

xenvsm(config)# sh module

Configuring Cisco UCS VM-FEX

1. Click the download link given below to install the latest VEM software installer.

http://software.cisco.com/download/release.html?mdfid=283853163&flowid=25821&softwareid=283853158&release=2.0%285b%29&relind=AVAILABLE&rellifecycle=&reltype=latest

2. Unzip the retrieved ISO file and browse to \ucs-bxxx-drivers.2.2.1d\VMware\VM-FEX\Cisco\MLOM\ESXi_5.5\cisco-vem-v161-5.5-1.2.7.1.vib


Select your network interface card and ESXi hypervisor version. The currently supported NICs are the 1280, M81KR, and MLOM, with ESX/ESXi 4.1 U2, ESX/ESXi 4.1 U3, ESXi 5.0 U1, and ESXi 5.5.

Installing Cisco VEM Software Bundle Using SSH or ESXi Shell cmdline

1. Upload the VEM installer .vib file to a datastore on the ESXi host, preferably on shared storage.

2. Log in to the ESXi host using SSH or the ESXi shell and run the command shown below:

# esxcli software vib install -v /vmfs/volumes/xxxxx/cisco-vem-v161-5.5-1.2.7.1.vib

3. To further verify, run the following command:

vmkload_mod -l | grep pts

You can also install using the VMware Update Manager.

Please refer to the link below for the different methods of installing or upgrading the VEM software on an ESX/ESXi host:

http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/vm_fex/vmware/gui/config_guide/GUI_VMware_VM-FEX_UCSM_Configuration_Guide_chapter3.html

Configuring VM-FEX on Cisco UCS Manager and Integration with vCenter

1. Add Cisco UCSM extension for VM-FEX on vCenter.

2. Export vCenter Extension.

3. In Cisco UCS Manager, go to the VM tab. Click VMware and click Export vCenter Extension. Save the extension file and click OK.

4. Log in to the vCenter Client. Select Plug-ins and then select Plug-in Manager.

5. Right-click in the empty pane for Plug-in Manager and select New Plug-in.


6. Click Browse and select saved vCenter Extension file.

7. Click Ignore.

8. Verify that Plug-in registered and shows in Available Plug-ins.


9. Configure VMware Integration.

10. From Cisco UCS Manager, go to the VM tab, click VMware, and in the right pane click Configure VMware Integration.

11. Log in to the vCenter Client, go to Networking, and create a folder to use with the VM-FEX distributed virtual switch.

12. Install the plug-in on the vCenter Server; this was completed in the steps above.

13. Click Next.

14. Fill out the section for Define VMware Distributed Virtual Switch.

15. Click Enable in the DVS section. Click Next.


16. Define Port Profile.

17. Select a name for port-profile, QoS Policy, Max Ports, VLANs.

18. Select Datacenter, Folder, Distributed Virtual Switch.

19. Click Next.


20. Apply Port Profile to Virtual Machines in vCenter Server.

21. Click Finish.

22. Click Ok.

23. The completed configuration is shown in the screenshot below.

24. It appears under the VMware node at vCenter > Datacenter > Folder > VM-FEX switch > port-profile.


25. Select the port profile created and set Host Network IO Performance to High Performance.

26. On vCenter console, go to networking tab. Select Distributed virtual switch for VM-FEX.

27. Click Hosts tab.

28. Right-click in the empty field and select Add Host to vSphere Distributed Switch.

29. Check the box next to the ESXi host and the vmnics to be added to the VM-FEX vDS.


30. Click Next.

31. On the tab for virtual machine networking, click the check box.

32. Migrate the destination port group for all the desired VMs to the port group created for VM networking.

33. Click Next.


34. Click Finish.


35. Select the virtual machines that are part of the VM-FEX configuration.

36. Right-click and select Edit Settings.

37. Click the Resources tab and select Memory.

38. Check the box to reserve all guest memory.

39. Select the ESXi host that is part of the VM-FEX vDS configuration.

40. Select the appropriate uplink port-profile as per the vmnic assignment and associated VLAN.

41. Verify the label for the network adapter is connected to the desired port-group on VM-FEX vDS.

42. Power on the virtual machine.


43. Go to Cisco UCS Manager, select the VM tab, and under the Virtual Machines node expand the host blade; it will show the host chassis/blade. Expand the virtual machines node and verify that the virtual machines on that blade are part of the VM-FEX vDS configuration with their associated VLANs.

44. Edit the settings of the powered-on virtual machine that was added to the VM-FEX vDS. Select Network adapter.

45. Check the status of DirectPath I/O: it shows as Active.


Follow this procedure to add the remaining ESXi hosts and virtual machines to the VM-FEX configuration.

SAN Configuration

The pair of Cisco Nexus 5548UP switches was used in the configuration to connect the 10 Gbps iSCSI ports on the EMC VNX5400 to the 10 GE ports of the Cisco UCS 6248 fabric interconnects.

Boot from SAN Benefits

Booting from SAN is another key feature that helps move toward stateless computing, in which there is no static binding between a physical server and the OS/applications it is tasked to run. The OS is installed on a SAN LUN, and the boot-from-SAN policy is applied to the service profile template or the service profile. If the service profile were to be moved to another server, the pWWNs of the HBAs and the boot-from-SAN (BFS) policy would move along with it. The new server then takes on the exact character of the old server, providing the truly stateless nature of the Cisco UCS blade server.

The key benefits of booting from the network:

Reduce Server Footprint: Boot from SAN alleviates the need for each server to have its own direct-attached disk, eliminating internal disks as a potential point of failure. Thin diskless servers also take up less facility space, require less power, and are generally less expensive because they have fewer hardware components.

Disaster and Server Failure Recovery: All the boot information and production data stored on a local SAN

can be replicated to a SAN at a remote disaster recovery site. If a disaster destroys functionality of the

servers at the primary site, the remote site can take over with minimal downtime.

Recovery from server failures is simplified in a SAN environment. With the help of snapshots, mirrors of a

failed server can be recovered quickly by booting from the original copy of its image. As a result, boot

from SAN can greatly reduce the time required for server recovery.

High Availability: A typical data center is highly redundant in nature - redundant paths, redundant disks

and redundant storage controllers. When operating system images are stored on disks in the SAN, it

supports high availability and eliminates the potential for mechanical failure of a local disk.

Rapid Redeployment: Businesses that experience temporary high production workloads can take advantage

of SAN technologies to clone the boot image and distribute the image to multiple servers for rapid

deployment. Such servers may only need to be in production for hours or days and can be readily removed

when the production need has been met. Highly efficient deployment of boot images makes temporary

server usage a cost effective endeavor.

Centralized Image Management: When operating system images are stored on networked disks, all

upgrades and fixes can be managed at a centralized location. Changes made to disks in a storage array are

readily accessible by each server.


With boot from SAN, the image resides on a SAN LUN, and the server communicates with the SAN through a host bus adapter (HBA). The HBA's BIOS contains the instructions that enable the server to find the boot disk. All FCoE-capable converged network adapter (CNA) cards supported on Cisco UCS B-Series blade servers support boot from SAN.

After the power-on self-test (POST), the server hardware fetches the device designated as the boot device in the hardware BIOS settings. Once the hardware detects the boot device, it follows the regular boot process.

Configuring Boot from SAN Overview

There are three distinct phases in the configuration of boot from SAN. The high-level procedures are:

1. SAN configuration on the Nexus 5548UPs

2. Storage array host initiator configuration

3. Cisco UCS configuration of Boot from SAN policy in the service profile

Each of these high-level phases is discussed in the following sections.

SAN Configuration on Cisco Nexus 5548UP

iSCSI boot from SAN is configured on the Cisco Nexus 5548UP switches simply by provisioning the required ports and VLAN(s) to support this boot protocol.

Make sure you have 10 Gb SFP+ modules connected to the Nexus 5548UP ports. The port mode and speed are set to AUTO, and the rate mode is dedicated.

The steps to prepare the Nexus 5548UPs for boot from SAN follow. We show only the configuration on Fabric A.

The same commands are used to configure the Nexus 5548UP for Fabric B, but are not shown here. The complete

configuration for both Cisco Nexus 5548UP switches is contained in the appendix to this document.

1. Enter configuration mode on each switch:

config t

2. Start by adding the npiv feature to both Nexus 5548UP switches:

feature npiv

3. Verify that the feature is enabled on both switches

show feature | grep npiv

npiv 1 enabled

4. Configure the iSCSI VLANs on both switches (275 for Fabric A and 276 for Fabric B)

vlan 275

name iSCSI_Fabric_A (N5K-A)

vlan 276

name iSCSI_Fabric_B (N5K-B)

5. Configure required port channels on both Nexus 5548UPs.

interface port-channel1

description VPC-Peerlink

switchport mode trunk

spanning-tree port type network

speed 10000

vpc peer-link


interface port-channel13

description to FI-5A

switchport mode trunk

vpc 13

interface port-channel14

description to FI-5B

switchport mode trunk

vpc 14

interface port-channel25

description to DM2-0

switchport mode trunk

untagged cos 5

switchport trunk allowed vlan 275-276

vpc 25

interface port-channel26

description to DM3-0

switchport mode trunk

untagged cos 5

switchport trunk allowed vlan 275-276

vpc 26

6. Configure Ethernet interfaces on both Nexus 5548Ups:

interface Ethernet1/1

description uplink to rtpsol-ucs5-A14

switchport mode trunk

channel-group 13 mode active

interface Ethernet1/2

description uplink to rtpsol-ucs5-B14

switchport mode trunk

channel-group 14 mode active

interface Ethernet1/5


description to rtpsol44-dm2-0

switchport mode trunk

switchport trunk allowed vlan 275-276

spanning-tree port type edge trunk

channel-group 25 mode active

interface Ethernet1/6

description to rtpsol44-dm3-0

switchport mode trunk

switchport trunk allowed vlan 275-276

spanning-tree port type edge trunk

channel-group 26 mode active

interface Ethernet1/17 (N5K-A)

description iSCSI link SPA-1 Boot

untagged cos 5

switchport mode trunk

switchport trunk allowed vlan 1,275

spanning-tree port type edge trunk

interface Ethernet1/18 (N5K-A)

description iSCSI link SPB-1 Boot

untagged cos 5

switchport mode trunk

switchport trunk native vlan 1

switchport trunk allowed vlan 1,276

spanning-tree port type edge trunk

interface Ethernet1/17 (N5K-B)

description iSCSI link SPA-2 Boot

untagged cos 5

switchport mode trunk

switchport trunk allowed vlan 1,275

spanning-tree port type edge trunk

interface Ethernet1/18 (N5K-B)

description iSCSI link SPB-2 Boot


untagged cos 5

switchport mode trunk

switchport trunk native vlan 1

switchport trunk allowed vlan 1,276

spanning-tree port type edge trunk

The iSCSI connection was used for configuring boot from SAN for all of the server blades.

Figure 20: EMC VNX5400 Target Ports

For detailed Nexus 5500 series switch configuration, refer to Cisco Nexus 5500 Series NX-OS SAN Switching

Configuration Guide. (See the References section of this document for a link.)

Configuring Boot from iSCSI SAN on EMC VNX5400

The steps required to configure boot-from-SAN LUNs on the EMC VNX are as follows:

1. Create a storage pool from which LUNs will be provisioned. The RAID type and the drive number and type are specified in the dialog box below. Five 600GB SAS drives are used in this example to create a RAID 5 pool. Uncheck “Schedule Auto-Tiering” to disable automatic tiering.


2. Provision LUNs from the storage pool created in step 1. Each LUN is 50GB in size to store the ESXi

hypervisor OS. Uncheck “Thin” to create thick LUNs.


3. Create a storage group, the container used for host to LUN mapping, for each of the ESXi hosts.

4. Register host initiators with the storage array to associate a set of initiators with a given host. The

registered host will be mapped to a specific boot LUN in the following step.


5. Assign each registered host to a separate storage group as shown below.

6. Assign a boot LUN to each of the storage groups. A host LUN ID is chosen to make the LUN visible to the host; it does not need to match the array LUN ID. All boot LUNs created for the testing are assigned host LUN ID 0. After the LUN assignment is complete, the boot LUN should be visible to the ESXi host during hypervisor installation.


iSCSI SAN Configuration on Cisco UCS Manager

Refer to the section Cisco Unified Computing System Configuration for detailed instructions on configuring iSCSI SAN boot.

EMC VNX5400 Storage Configuration

The figure below shows the physical storage layout of the disks in the reference architecture.


The above storage layout is used for the following configurations:

Four SAS disks (0_0_0 to 0_0_3) are used for the VNX OE.

The VNX series does not require a dedicated hot spare drive. Disks 0_0_4 and 1_0_2 to 1_0_4 are

unbound disks that can be used as hot spares when needed. These disks are marked as hot spares in the

diagram.

Two 100GB Flash drives are used for EMC VNX FAST Cache. See the “EMC FAST Cache in Practice”

section below to follow the FAST Cache configuration best practices.

Five SAS disks (1_0_5 to 1_0_9) on the RAID 5 storage pool 1 are used to store the iSCSI boot LUNs for

vSphere hosts.

Five SAS disks (1_0_10 to 1_0_14) on the RAID 5 storage pool 2 are used to store the infrastructure virtual

machines.

Five SAS disks (0_1_0 to 0_1_4) on the RAID 5 storage pool 3 are used to store the PVS vDisks and TFTP

images.

Sixteen SAS disks (0_1_5 to 0_1_14 and 1_1_0 to 1_1_5) on the RAID 10 storage pool 4 are used to store

PVS write cache allocated for the virtual desktops.

Twenty four NL-SAS disks (1_1_6 to 1_1_14 and 0_2_0 to 0_2_14) on the RAID 6 storage pool 5 are used

to store the user profiles and home directories.

FAST Cache is enabled on all storage pools.

Disks 0_0_5 to 0_0_24 are unbound. They are not used for testing this solution.

All SAS disks used for this solution are 600GB.

Example EMC Volume Configuration for PVS Write Cache

The figure below shows the layout of the NFS file systems used to store the PVS write cache for both HVD-based and HSD-based virtual desktops:


Ten LUNs of 417GB each are carved out of a RAID 10 storage pool configured with 16 SAS drives. The LUNs are presented to VNX File as dvols that belong to a system-defined NAS pool. Four 1TB file systems are then carved out of the NAS pool and presented to the ESXi servers as four NFS datastores, which are provisioned seamlessly using the EMC Virtual Storage Integrator (VSI) plug-in for vSphere.
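As a quick sanity check of the numbers above, the pool capacity can be worked out by hand. This is a sketch; it assumes RAID 10 mirroring halves the raw capacity of the sixteen 600GB SAS drives:

```shell
# Capacity sanity check for the PVS write cache pool (assumption: RAID 10
# mirroring halves raw capacity; drive counts and sizes come from the text)
raw_gb=$(( 16 * 600 ))          # sixteen 600GB SAS drives
usable_gb=$(( raw_gb / 2 ))     # capacity remaining after RAID 10 mirroring
carved_gb=$(( 10 * 417 ))       # ten 417GB LUNs presented to VNX File
echo "raw=${raw_gb}GB usable=${usable_gb}GB carved=${carved_gb}GB"
```

The ten LUNs consume roughly 4.2TB of the approximately 4.8TB usable in the pool, leaving headroom for pool metadata and growth.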

EMC Storage Configuration for PVS vDisks

Similar to the PVS write cache storage, ten LUNs of 208GB each are carved out of the RAID 5 storage pool configured with 5 SAS drives to support a 250GB NFS file system that is designated to store PVS vDisks for the desktops.

EMC Storage Configuration for VMware ESXi 5.5 Infrastructure and VDA Clusters

One LUN of 1TB is carved out of a RAID 5 storage pool configured with 5 SAS drives. The LUN is used to store infrastructure virtual machines such as domain controllers, SQL servers, the vCenter server, XenDesktop controllers, PVS servers, and Nexus 1000V VSMs.

Example EMC Boot LUN Configuration

Each vSphere host requires an iSCSI boot LUN from SAN for the hypervisor OS. A total of 10 LUNs are carved out of a 5-disk RAID 5 pool. Each LUN is 50GB in size.


EMC FAST Cache in Practice

FAST Cache is best suited for small random I/O where the data has skew. The higher the locality, the greater the benefit from FAST Cache.

General Considerations

EMC recommends first utilizing available flash drives for FAST Cache, which can globally benefit all LUNs in the

storage system. Then supplement performance as needed with additional flash drives in storage pool tiers.

Match the FAST Cache size to the size of the active data set.

– For existing EMC VNX/CX4 customers, EMC Pre-Sales has tools that can help determine the active data set size.

If the active data set size is unknown, size FAST Cache to be 5 percent of your capacity, or make up any shortfall with a flash tier within storage pools.

Consider the ratio of FAST Cache drives to working drives. Although a small FAST Cache can satisfy a

high IOPS requirement, large storage pool configurations will distribute I/O across all pool resources. A

large pool of HDDs might be able to provide better performance than a few drives of FAST Cache.

Preferred application workloads for FAST Cache:

Small-block random I/O applications with high locality

High frequency of access to the same data

Systems where current performance is limited by HDD capability, not SP capability

Avoid enabling FAST Cache for LUNs that are not expected to benefit, such as when:

The primary workload is sequential.

The primary workload is large-block I/O.

Avoid enabling FAST Cache for LUNs where the workload is small-block sequential, including:

Database logs

Circular logs

VNX OE for File SavVol (snapshot storage)

Enabling FAST Cache on a Running System

When adding FAST Cache to a running system, it is recommended to enable FAST Cache on a few LUNs at a time,

and then wait until those LUNs have equalized in FAST Cache before adding more LUNs.

FAST Cache can improve overall system performance if the current bottleneck is drive-related, but boosting the

IOPS will result in greater CPU utilization on the SPs. Systems should be sized so that the maximum sustained

utilization is 70 percent. On an existing system, check the SP CPU utilization of the system, and then proceed as

follows:

Less than 60 percent SP CPU utilization – enable groups of LUNs or one pool at a time; let them equalize

in the cache, and ensure that SP CPU utilization is still acceptable before turning on FAST Cache for more

LUNs/pools.

60-80 percent SP CPU utilization – scale in carefully; enable FAST Cache on one or two LUNs at a time,

and verify that SP CPU utilization does not go above 80 percent.

Greater than 80 percent SP CPU utilization – do not enable FAST Cache.

Avoid enabling FAST Cache for a group of LUNs where the aggregate LUN capacity exceeds 20 times the total

FAST Cache capacity.

Enable FAST Cache on a subset of the LUNs first and allow them to equalize before adding the other

LUNs.
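Applied to this solution, the 20x guideline works out as follows. This is a sketch; it assumes the two 100GB flash drives form a single RAID 1 pair, giving about 100GB of usable FAST Cache:

```shell
# Worked example of the 20x capacity guideline (assumption: a RAID 1 pair of
# two 100GB flash drives yields ~100GB of usable FAST Cache)
fast_cache_gb=$(( 2 * 100 / 2 ))           # usable FAST Cache capacity
lun_ceiling_gb=$(( fast_cache_gb * 20 ))   # aggregate enabled-LUN ceiling
echo "FAST Cache=${fast_cache_gb}GB, keep enabled LUNs under ~${lun_ceiling_gb}GB"
```

In other words, with this FAST Cache configuration, keep the aggregate capacity of FAST Cache-enabled LUNs under roughly 2TB.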

FAST Cache is enabled as an array-wide feature in the system properties of the array in EMC Unisphere. Click the FAST Cache tab, then click Create and select the Flash drives to create the FAST Cache. RAID 1 is the only RAID type allowed. There are no user-configurable parameters for FAST Cache. In this solution, two 100GB SSD drives were used for FAST Cache. Figure 21 shows the FAST Cache settings for the VNX5400 array used in this solution.

Figure 21: VNX5400–FAST Cache tab

To enable FAST Cache for a particular pool, navigate to the Storage Pool Properties page in Unisphere, and then click the Advanced tab. Select Enabled to enable FAST Cache, as shown in Figure 22.


Figure 22: VNX5400–Enable FAST Cache for a storage pool

EMC Additional Configuration Information

The following tuning configurations optimize NFS/CIFS performance on the VNX5400 Data Movers:

NFS active threads per Data Mover

The default number of threads dedicated to serving NFS/CIFS requests is 384 per Data Mover on the VNX5400. Some use cases, such as the scanning of desktops, might require a larger number of NFS active threads. It is recommended to increase the number of active NFS and CIFS threads to 1024 on the active Data Mover to support up to 1000 desktops. The nthreads parameters can be set by using the following commands:

# server_param server_2 -facility nfs -modify nthreads -value 1024

# server_param server_2 -facility cifs -modify nthreads -value 1024

Reboot the Data Mover for the change to take effect.

Type the following command to confirm the value of the parameter:

# server_param server_2 -facility nfs -info nthreads

server_2 :

name = nthreads

facility_name = nfs

default_value = 384

current_value = 1024

configured_value = 1024

user_action = none

change_effective = immediate

range = (32,2048)

description = Number of threads dedicated to serve nfs requests,

and memory dependent.


The values of the NFS and CIFS active threads can also be configured by editing the properties of the nthreads Data Mover parameter in the Settings > Data Mover Parameters menu in Unisphere, as shown in Figure 23. Type nthreads in the filter field, highlight the nthreads value you want to edit, and select Properties to open the nthreads properties window. Update the Value field with the new value and click OK. Perform this procedure for each of the nthreads Data Mover parameters listed in the menu. Reboot the Data Movers for the change to take effect.

Figure 23: VNX5400–nThreads properties

Installing and Configuring ESXi 5.5

Log in to Cisco UCS Manager on the Cisco UCS 6200 Fabric Interconnect

The IP KVM enables the administrator to begin the installation of the operating system (OS) through remote media.

It is necessary to log in to the UCS environment to run the IP KVM.

To log in to the Cisco UCS environment, complete the following steps:

1. Open a Web browser and enter the IP address for the Cisco UCS cluster address. This step launches the Cisco

UCS Manager application.

2. Log in to Cisco UCS Manager by using the admin user name and password.

3. From the main menu, click the Servers tab.

4. Select Servers > Service Profiles > root > VM-Host-Infra-01.

5. Right-click VM-Host-Infra-01 and select KVM Console.

6. Select Servers > Service Profiles > root > VM-Host-Infra-02.

7. Right-click VM-Host-Infra-02 and select KVM Console Actions > KVM Console.


Set Up VMware ESXi Installation

ESXi Hosts VM-Host-Infra-01 and VM-Host-Infra-02

To prepare the server for the OS installation, complete the following steps on each ESXi host:

1. In the KVM window, click the Virtual Media tab.

2. Click Add Image.

3. Browse to the ESXi installer ISO image file and click Open.

4. Select the Mapped checkbox to map the newly added image.

5. Click the KVM tab to monitor the server boot.

6. Boot the server by selecting Boot Server and clicking OK. Then click OK again.

Install ESXi

To install VMware ESXi to the SAN-bootable LUN of the hosts, complete the following steps on each host:

1. On reboot, the machine detects the presence of the ESXi installation media. Select the ESXi installer from the

menu that is displayed.

2. After the installer is finished loading, press Enter to continue with the installation.

3. Read and accept the end-user license agreement (EULA). Press F11 to accept and continue.

4. Select the EMC LUN that was previously set up as the installation disk for ESXi and press Enter to continue

with the installation.

5. Select the appropriate keyboard layout and press Enter.

6. Enter and confirm the root password and press Enter.

7. The installer issues a warning that existing partitions will be removed from the volume. Press F11 to continue

with the installation.

8. After the installation is complete, clear the Mapped checkbox (located in the Virtual Media tab of the KVM

console) to unmap the ESXi installation image.

The ESXi installation image must be unmapped to make sure that the server reboots into ESXi and not into the installer.

9. The Virtual Media window might issue a warning stating that it is preferable to eject the media from the guest.

Because the media cannot be ejected and it is read-only, simply click Yes to unmap the image.

10. From the KVM tab, press Enter to reboot the server.

Set Up Management Networking for ESXi Hosts

Adding a management network for each VMware host is necessary for managing the host. To configure the ESXi hosts with access to the management network, complete the following steps on each ESXi host:

1. After the server has finished rebooting, press F2 to customize the system.

2. Log in as root and enter the corresponding password.

3. Select the Configure the Management Network option and press Enter.

4. Select the VLAN (Optional) option and press Enter.

5. Enter the <<var_ib-mgmt_vlan_id>> and press Enter.

6. From the Configure Management Network menu, select IP Configuration and press Enter.

7. Select the Set Static IP Address and Network Configuration option by using the space bar.


8. Enter the IP address for managing the first ESXi host: <<var_vm_host_infra_01_ip>>.

9. Enter the subnet mask for the first ESXi host.

10. Enter the default gateway for the first ESXi host.

11. Press Enter to accept the changes to the IP configuration.

12. Select the IPv6 Configuration option and press Enter.

13. Using the spacebar, unselect Enable IPv6 (restart required) and press Enter.

14. Select the DNS Configuration option and press Enter.

Note: Because the IP address is assigned manually, the DNS information must also be entered manually.

15. Enter the IP address of the primary DNS server.

16. Optional: Enter the IP address of the secondary DNS server.

17. Enter the fully qualified domain name (FQDN) for the first ESXi host.

18. Press Enter to accept the changes to the DNS configuration.

19. Press Esc to exit the Configure Management Network submenu.

20. Press Y to confirm the changes and return to the main menu.

21. The ESXi host reboots. After reboot, press F2 and log back in as root.

22. Select Test Management Network to verify that the management network is set up correctly and press Enter.

23. Press Enter to run the test.

24. Press Enter to exit the window.

25. Press Esc to log out of the VMware console.
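For administrators who prefer the ESXi Shell over the console menus, the same management-network settings can also be applied with esxcli. This is a hedged sketch: the VLAN ID, IP addresses, and FQDN below are placeholders (not values from this document), and the run wrapper only echoes each command so the sequence can be reviewed before it is executed for real.

```shell
# Dry-run sketch of the management-network steps above (placeholder values)
run() { echo "$@"; }   # remove the echo to actually execute the commands

run esxcli network vswitch standard portgroup set -p "Management Network" -v 100
run esxcli network ip interface ipv4 set -i vmk0 -t static -I 10.10.60.11 -N 255.255.255.0
run esxcli network ip route ipv4 add -n default -g 10.10.60.1
run esxcli network ip dns server add -s 10.10.60.5
run esxcli system hostname set --fqdn vm-host-infra-01.example.local
```

Each run line prints the command it would issue; dropping the echo from the wrapper turns the preview into a live configuration run.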

Download VMware vSphere Client and vSphere Remote CLI

To download the VMware vSphere Client and install the vSphere Remote CLI, complete the following steps:

1. Open a Web browser on the management workstation and navigate to the VM-Host-Infra-01 management IP

address.

2. Download and install both the vSphere Client and the Windows version of vSphere Remote Command Line.

Note: These applications are downloaded from the VMware Web site and Internet access is required on the

management workstation.

Log in to VMware ESXi Hosts by Using VMware vSphere Client

To log in to the ESXi host by using the VMware vSphere Client, complete the following steps:

1. Open the recently downloaded VMware vSphere Client and enter the IP address as the host you are trying to

connect to: <<var_vm_host_infra_01_ip>>.

2. Enter root for the user name.

3. Enter the root password.

4. Click Login to connect.

Download Updated Cisco VIC enic and fnic Drivers

To download the Cisco virtual interface card (VIC) enic and fnic drivers, complete the following steps:

Note: The enic version used in this configuration is 2.1.2.42, and the fnic version is 1.6.0.5.

1. Open a Web browser on the management workstation and navigate to http://software.cisco.com/download/release.html?mdfid=283853163&softwareid=283853158&release=2.0(5)&relind=AVAILABLE&rellifecycle=&reltype=latest. Log in and select the driver ISO for version 2.2(1d). Download the ISO file. After the ISO file is downloaded, either burn the ISO to a CD or map the ISO to a drive letter. Extract the following files from within the VMware directory for ESXi 5.5:

Network – enic-2.1.2.42-550-bundle.zip

Storage – fnic-1.6.0.5-550-bundle.zip

2. Document the saved location.

Download EMC PowerPath/VE Driver and VAAI Plug-in for File

To download the EMC PowerPath/VE driver and the VAAI plug-in for File, complete the following steps:

Note: The PowerPath/VE version used in this configuration is 5.9 SP1 build 11, and the VAAI plug-in

version is 1.0.11.

1. Open a Web browser on the management workstation and navigate to https://support.emc.com. Login and

download/extract the following files:

PowerPath/VE – EMCPower.VMWARE.5.9.SP1.b011.zip

VAAI plug-in – EMCNasPlugin-1.0-11.zip

2. Document the saved location.

Load Updated Cisco VIC enic and fnic Drivers and EMC Bundles

To load the updated versions of the enic and fnic drivers for the Cisco VIC, complete the following steps for each host from the vSphere Client:

1. From each vSphere Client, select the host in the inventory.

2. Click the Summary tab to view the environment summary.

3. From Resources > Storage, right-click datastore1 and select Browse Datastore.

4. Click the fourth button and select Upload File.

5. Navigate to the saved location for the downloaded enic driver version and select net-enic-2.1.2.42-1OEM.550.0.0.472560.x86_64.zip.

6. Click Open to open the file.

7. Click Yes to upload the .zip file to datastore1.

8. Click the fourth button and select Upload File.

9. Navigate to the saved location for the downloaded fnic driver version and select scsi-fnic-1.6.0.5-1OEM.550.0.0.472560.x86_64.zip.

10. Click Open to open the file.

11. Click Yes to upload the .zip file to datastore1.

12. Click the fourth button and select Upload File.

13. Navigate to the saved location for the downloaded PowerPath/VE driver version and select

EMCPower.VMWARE.5.9.SP1.b011.zip.

14. Click Open to open the file.

15. Click Yes to upload the .zip file to datastore1.

16. Click the fourth button and select Upload File.

17. Navigate to the saved location for the downloaded VAAI plug-in version and select EMCNasPlugin-1.0-11.zip.

18. Click Open to open the file.


19. Click Yes to upload the .zip file to datastore1.

20. From the management workstation, open the VMware vSphere Remote CLI that was previously installed.

21. At the command prompt, run the following commands to account for each host (enic):

esxcli -s <<var_vm_host_infra_01_ip>> -u root -p <<var_password>> software vib install --no-sig-check -d /vmfs/volumes/datastore1/net-enic-2.1.2.42-1OEM.550.0.0.472560.x86_64.zip

esxcli -s <<var_vm_host_infra_02_ip>> -u root -p <<var_password>> software vib install --no-sig-check -d /vmfs/volumes/datastore1/net-enic-2.1.2.42-1OEM.550.0.0.472560.x86_64.zip

22. At the command prompt, run the following commands to account for each host (fnic):

esxcli -s <<var_vm_host_infra_01_ip>> -u root -p <<var_password>> software vib install --no-sig-check -d /vmfs/volumes/datastore1/scsi-fnic-1.6.0.5-1OEM.550.0.0.472560.x86_64.zip

esxcli -s <<var_vm_host_infra_02_ip>> -u root -p <<var_password>> software vib install --no-sig-check -d /vmfs/volumes/datastore1/scsi-fnic-1.6.0.5-1OEM.550.0.0.472560.x86_64.zip

23. At the command prompt, run the following commands to account for each host (PowerPath/VE):

esxcli -s <<var_vm_host_infra_01_ip>> -u root -p <<var_password>> software vib install --no-sig-check -d /vmfs/volumes/datastore1/EMCPower.VMWARE.5.9.SP1.b011.zip

esxcli -s <<var_vm_host_infra_02_ip>> -u root -p <<var_password>> software vib install --no-sig-check -d /vmfs/volumes/datastore1/EMCPower.VMWARE.5.9.SP1.b011.zip

24. At the command prompt, run the following commands to account for each host (VAAI plug-in):

esxcli -s <<var_vm_host_infra_01_ip>> -u root -p <<var_password>> software vib install --no-sig-check -d /vmfs/volumes/datastore1/EMCNasPlugin-1.0-11.zip

esxcli -s <<var_vm_host_infra_02_ip>> -u root -p <<var_password>> software vib install --no-sig-check -d /vmfs/volumes/datastore1/EMCNasPlugin-1.0-11.zip
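The four bundles and two hosts make eight nearly identical invocations, which can be generated with a small loop. This is a sketch: HOST_01_IP, HOST_02_IP, and PASSWORD are placeholders for the document's variables, and echo is used so the commands are printed for review rather than executed against live hosts.

```shell
# Generate the eight per-host install commands (placeholders: HOST_01_IP,
# HOST_02_IP, PASSWORD; remove 'echo' to execute against live hosts)
bundles="net-enic-2.1.2.42-1OEM.550.0.0.472560.x86_64.zip
scsi-fnic-1.6.0.5-1OEM.550.0.0.472560.x86_64.zip
EMCPower.VMWARE.5.9.SP1.b011.zip
EMCNasPlugin-1.0-11.zip"
for host in HOST_01_IP HOST_02_IP; do
  for b in $bundles; do
    echo esxcli -s "$host" -u root -p PASSWORD software vib install \
      --no-sig-check -d "/vmfs/volumes/datastore1/$b"
  done
done
```

Looping keeps the bundle list in one place, so adding a third host or an updated driver bundle changes a single line instead of four command pairs.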

25. From the vSphere Client, right-click each host in the inventory and select Reboot.

26. Select Yes to continue.

27. Enter a reason for the reboot and click OK.

28. After the reboot is complete, log back in to both hosts using the vSphere Client.

Set Up VMkernel Ports and Virtual Switch

To set up the VMkernel ports and the virtual switches on the VM-Host-Infra-01 ESXi host, complete the following steps:

1. From each vSphere Client, select the host in the inventory.

2. Click the Configuration tab.

3. Click Networking in the Hardware pane.

4. Click Properties on the right side of vSwitch0.

5. Select the vSwitch configuration and click Edit.

6. From the General tab, change the MTU to 9000.

7. Click OK to close the properties for vSwitch0.

8. Select the Management Network configuration and click Edit.

9. Change the network label to VMkernel-MGMT and select the Management Traffic checkbox.

10. Click OK to finalize the edits for Management Network.

11. Select the VM Network configuration and click Edit.

12. Change the network label to IB-MGMT Network and enter <<var_ib-mgmt_vlan_id>> in the VLAN ID

(Optional) field.

13. Click OK to finalize the edits for VM Network.


14. Click Add to add a network element.

15. Select VMkernel and click Next.

16. Change the network label to VMkernel-NFS and enter <<var_nfs_vlan_id>> in the VLAN ID (Optional) field.

17. Click Next to continue with the NFS VMkernel creation.

18. Enter the IP address <<var_nfs_vlan_id_ip_host-01>> and the subnet mask <<var_nfs_vlan_id_mask_host-01>> for the NFS VLAN interface for VM-Host-Infra-01.

19. Click Next to continue with the NFS VMkernel creation.

20. Click Finish to finalize the creation of the NFS VMkernel interface.

21. Select the VMkernel-NFS configuration and click Edit.

22. Change the MTU to 9000.

23. Click OK to finalize the edits for the VMkernel-NFS network.

24. Click Add to add a network element.

25. Select VMkernel and click Next.

26. Change the network label to VMkernel-vMotion and enter <<var_vmotion_vlan_id>> in the VLAN ID

(Optional) field.

27. Select the Use This Port Group for vMotion checkbox.

28. Click Next to continue with the vMotion VMkernel creation.

29. Enter the IP address <<var_vmotion_vlan_id_ip_host-01>> and the subnet mask

<<var_vmotion_vlan_id_mask_host-01>> for the vMotion VLAN interface for VM-Host-Infra-01.

30. Click Next to continue with the vMotion VMkernel creation.

31. Click Finish to finalize the creation of the vMotion VMkernel interface.

32. Select the VMkernel-vMotion configuration and click Edit.

33. Change the MTU to 9000.

34. Click OK to finalize the edits for the VMkernel-vMotion network.

35. Close the dialog box to finalize the ESXi host networking setup. The networking for the ESXi host should be

similar to the following example:


To set up the VMkernel ports and the virtual switches on the VM-Host-Infra-02 ESXi host, complete the following

steps:

36. From each vSphere Client, select the host in the inventory.

37. Click the Configuration tab.

38. Click Networking in the Hardware pane.

39. Click Properties on the right side of vSwitch0.

40. Select the vSwitch configuration and click Edit.

41. From the General tab, change the MTU to 9000.

42. Click OK to close the properties for vSwitch0.

43. Select the Management Network configuration and click Edit.

44. Change the network label to VMkernel-MGMT and select the Management Traffic checkbox.

45. Click OK to finalize the edits for the Management Network.

46. Select the VM Network configuration and click Edit.

47. Change the network label to IB-MGMT Network and enter <<var_ib-mgmt_vlan_id>> in the VLAN ID

(Optional) field.

48. Click OK to finalize the edits for the VM Network.

49. Click Add to add a network element.


50. Select VMkernel and click Next.

51. Change the network label to VMkernel-NFS and enter <<var_nfs_vlan_id>> in the VLAN ID (Optional) field.

52. Click Next to continue with the NFS VMkernel creation.

53. Enter the IP address <<var_nfs_vlan_id_ip_host-02>> and the subnet mask <<var_nfs_vlan_id_mask_host-02>>

for the NFS VLAN interface for VM-Host-Infra-02.

54. Click Next to continue with the NFS VMkernel creation.

55. Click Finish to finalize the creation of the NFS VMkernel interface.

56. Select the VMkernel-NFS configuration and click Edit.

57. Change the MTU to 9000.

58. Click OK to finalize the edits for the VMkernel-NFS network.

59. Click Add to add a network element.

60. Select VMkernel and click Next.

61. Change the network label to VMkernel-vMotion and enter <<var_vmotion_vlan_id>> in the VLAN ID

(Optional) field.

62. Select the Use This Port Group for vMotion checkbox.

63. Click Next to continue with the vMotion VMkernel creation.

64. Enter the IP address <<var_vmotion_vlan_id_ip_host-02>> and the subnet mask

<<var_vmotion_vlan_id_mask_host-02>> for the vMotion VLAN interface for VM-Host-Infra-02.

65. Click Next to continue with the vMotion VMkernel creation.

66. Click Finish to finalize the creation of the vMotion VMkernel interface.

67. Select the VMkernel-vMotion configuration and click Edit.

68. Change the MTU to 9000.

69. Click OK to finalize the edits for the VMkernel-vMotion network.

70. Close the dialog box to finalize the ESXi host networking setup. The networking for the ESXi host should be

similar to the following example:
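The GUI steps above map roughly onto the following esxcli sequence, shown here for VM-Host-Infra-02. This is a hedged sketch: the VLAN IDs, IP addresses, and vmk numbering are placeholders rather than values from this document, and the run wrapper echoes each command instead of executing it.

```shell
# Dry-run sketch of the vSwitch0 / VMkernel configuration (placeholder values)
run() { echo "$@"; }   # remove the echo to actually execute the commands

run esxcli network vswitch standard set -v vSwitch0 -m 9000
run esxcli network vswitch standard portgroup add -v vSwitch0 -p VMkernel-NFS
run esxcli network vswitch standard portgroup set -p VMkernel-NFS -v 101
run esxcli network ip interface add -i vmk1 -p VMkernel-NFS -m 9000
run esxcli network ip interface ipv4 set -i vmk1 -t static -I 10.10.61.22 -N 255.255.255.0
run esxcli network vswitch standard portgroup add -v vSwitch0 -p VMkernel-vMotion
run esxcli network vswitch standard portgroup set -p VMkernel-vMotion -v 102
run esxcli network ip interface add -i vmk2 -p VMkernel-vMotion -m 9000
run esxcli network ip interface ipv4 set -i vmk2 -t static -I 10.10.62.22 -N 255.255.255.0
run vim-cmd hostsvc/vmotion/vnic_set vmk2   # mark vmk2 for vMotion traffic
```

Reviewing the echoed sequence before execution makes it easy to confirm the MTU, VLAN IDs, and addresses against the values entered in the GUI procedure.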


Mount Required Datastores

To mount the required datastores, complete the following steps on each ESXi host:

1. From each vSphere Client, select the host in the inventory.

2. Click the Configuration tab to enable configurations.

3. Click Storage in the Hardware pane.

4. From the Datastore area, click Add Storage to open the Add Storage wizard.

5. Select Network File System and click Next.

6. The wizard prompts for the location of the NFS export. Enter <<var_nfs_lif02_ip>> as the IP address for

nfs_lif02.

7. Enter /infra_datastore_1 as the path for the NFS export.

8. Make sure that the Mount NFS read only checkbox is NOT selected.

9. Enter infra_datastore_1 as the datastore name.

10. Click Next to continue with the NFS datastore creation.

11. Click Finish to finalize the creation of the NFS datastore.

12. From the Datastore area, click Add Storage to open the Add Storage wizard.

13. Select Network File System and click Next.


14. The wizard prompts for the location of the NFS export. Enter <<var_nfs_lif01_ip>> as the IP address for

nfs_lif01.

15. Enter /infra_swap as the path for the NFS export.

16. Make sure that the Mount NFS read only checkbox is NOT selected.

17. Enter infra_swap as the datastore name.

18. Click Next to continue with the NFS datastore creation.

19. Click Finish to finalize the creation of the NFS datastore.
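The same two datastores can also be mounted from the vSphere Remote CLI with esxcli. A sketch, with the document's address variables replaced by shell placeholders and a run wrapper that echoes rather than executes:

```shell
# Dry-run sketch of the NFS datastore mounts (placeholder addresses)
run() { echo "$@"; }             # remove the echo to actually execute
NFS_LIF01_IP=192.168.1.101       # stands in for the var_nfs_lif01_ip variable
NFS_LIF02_IP=192.168.1.102       # stands in for the var_nfs_lif02_ip variable

run esxcli storage nfs add -H "$NFS_LIF02_IP" -s /infra_datastore_1 -v infra_datastore_1
run esxcli storage nfs add -H "$NFS_LIF01_IP" -s /infra_swap -v infra_swap
run esxcli storage nfs list      # verify both datastores are mounted
```

The final list command provides the same confirmation as checking the Datastore area in the vSphere Client.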

Configure NTP on ESXi Hosts

To configure Network Time Protocol (NTP) on the ESXi hosts, complete the following steps on each host:

1. From each vSphere Client, select the host in the inventory.

2. Click the Configuration tab to enable configurations.

3. Click Time Configuration in the Software pane.

4. Click Properties at the upper right side of the window.

5. At the bottom of the Time Configuration dialog box, click Options.

6. In the NTP Daemon Options dialog box, complete the following steps:

a. Click General in the left pane and select Start and stop with host.

b. Click NTP Settings in the left pane and click Add.

7. In the Add NTP Server dialog box, enter <<var_global_ntp_server_ip>> as the IP address of the NTP server and

click OK.

8. In the NTP Daemon Options dialog box, select the Restart NTP Service to Apply Changes checkbox and click

OK.

9. In the Time Configuration dialog box, complete the following steps:

a. Select the NTP Client Enabled checkbox and click OK.

b. Verify that the clock is now set to approximately the correct time.

Note: The NTP server time may vary slightly from the host time.

Move VM Swap File Location

To move the VM swap file location, complete the following steps on each ESXi host:

1. From each vSphere Client, select the host in the inventory.

2. Click the Configuration tab to enable configurations.

3. Click Virtual Machine Swapfile Location in the Software pane.

4. Click Edit at the upper right side of the window.

5. Select Store the swapfile in a swapfile datastore selected below.

6. Select infra_swap as the datastore in which to house the swap files.

7. Click OK to finalize moving the swap file location.

Install and Configure vCenter and Clusters

The procedures in the following subsections provide detailed instructions for installing VMware vCenter 5.5 in a VSPEX environment. After the procedures are completed, a VMware vCenter Server will be configured along with a Microsoft SQL Server database to provide database support to vCenter. These deployment procedures are customized to include the environment variables.

Note: This procedure focuses on the installation and configuration of an external Microsoft SQL Server

2012 database, but other types of external databases are also supported by vCenter. To use an alternative

database, refer to the VMware vSphere 5.5 documentation for information about how to configure the

database and integrate it into vCenter.

To install VMware vCenter 5.5, an accessible Windows Active Directory® (AD) Domain is necessary. If an existing

AD Domain is not available, an AD virtual machine, or AD pair, can be set up in this VSPEX environment.

Build Microsoft SQL Server VM

To build a SQL Server virtual machine (VM) on the VM-Host-Infra-01 ESXi host, complete the following steps:

1. Log in to the host by using the VMware vSphere Client.

2. In the vSphere Client, select the host in the inventory pane.

3. Right-click the host and select New Virtual Machine.

4. Select Custom and click Next.

5. Enter a name for the VM. Click Next.

6. Select infra_datastore_1. Click Next.

7. Select Virtual Machine Version: 8. Click Next.

8. Verify that the Windows option and the Microsoft Windows Server 2012 R2 (64-bit) version are selected. Click Next.

9. Select two virtual sockets and one core per virtual socket. Click Next.

10. Select 4GB of memory. Click Next.

11. Select one network interface card (NIC).

12. For NIC 1, select the IB-MGMT Network option and the VMXNET 3 adapter. Click Next.

13. Keep the LSI Logic SAS option for the SCSI controller selected. Click Next.

14. Keep the Create a New Virtual Disk option selected. Click Next.

15. Make the disk size at least 60GB. Click Next.

16. Click Next.

17. Select the checkbox for Edit the Virtual Machine Settings Before Completion. Click Continue.

18. Click the Options tab.

19. Select Boot Options.

20. Select the Force BIOS Setup checkbox.

21. Click Finish.

22. From the left pane, expand the host field by clicking the plus sign (+).

23. Right-click the newly created SQL Server VM and click Open Console.

24. Click the third button (green right arrow) to power on the VM.

25. Click the ninth button (CD with a wrench) to map the Windows Server 2012 R2 ISO, and then select Connect to

ISO Image on Local Disk.

26. Navigate to the Windows Server 2012 R2 ISO, select it, and click Open.


27. Click in the BIOS Setup Utility window and use the right arrow key to navigate to the Boot menu. Use the down arrow key to select CD-ROM Drive. Press the plus (+) key twice to move CD-ROM Drive to the top of the list. Press F10 and Enter to save the selection and exit the BIOS Setup Utility.

28. The Windows Installer boots. Select the appropriate language, time and currency format, and keyboard. Click Next.

29. Click Install Now.

30. Make sure that the Windows Server 2012 R2 Standard (Full Installation) option is selected. Click Next.

31. Read and accept the license terms and click Next.

32. Select Custom (Advanced). Make sure that Disk 0 Unallocated Space is selected. Click Next to allow the Windows installation to complete.

33. After the Windows installation is complete and the VM has rebooted, click OK to set the Administrator password.

34. Enter and confirm the Administrator password and click the blue arrow to log in. Click OK to confirm the password change.

35. After logging in to the VM desktop, from the VM console window, select the VM menu. Under Guest, select Install/Upgrade VMware Tools. Click OK.

36. If prompted to eject the Windows installation media before running the setup for VMware Tools, click OK, then click OK.

37. In the dialog box, select Run setup64.exe.

38. In the VMware Tools installer window, click Next.

39. Make sure that Typical is selected and click Next.

40. Click Install.

41. Click Finish.

42. Click Yes to restart the VM.

43. After the reboot is complete, select the VM menu. Under Guest, select Send Ctrl+Alt+Del and then enter the password to log in to the VM.

44. Set the time zone for the VM, IP address, gateway, and host name. Add the VM to the Windows AD domain.

Note: A reboot is required.

45. If necessary, activate Windows.

46. Log back in to the VM and download and install all required Windows updates.

Note: This process requires several reboots.

Install Microsoft SQL Server 2012 for vCenter

To install SQL Server on the vCenter SQL Server VM, complete the following steps:

1. Connect to an AD Domain Controller in the Windows Domain and add an admin user using the Active Directory Users and Computers tool. This user should be a member of the Domain Administrators security group.

2. Log in to the vCenter SQL Server VM as the admin user. Open Server Manager and select Local Server.

3. Click Add Roles and Features.

4. Select Role-based or feature-based installation and click Next.

5. Select the current server as the destination server and click Next.

6. Click Next.

7. Expand .NET Framework 3.5 Features and select only .NET Framework 3.5.

8. Click Next.

9. Select Specify an alternate source path if necessary.

10. Click Install.

11. Click Close.

12. Open Windows Firewall with Advanced Security by navigating to Start > Administrative Tools > Windows Firewall with Advanced Security.

13. Select Inbound Rules and click New Rule.

14. Select Port and click Next.

15. Select TCP and enter the specific local port 1433. Click Next.

16. Select Allow the Connection. Click Next, and then click Next again.

17. Name the rule SQL Server and click Finish.

18. Close Windows Firewall with Advanced Security.
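The inbound rule created in steps 13 through 17 can also be created from an elevated command prompt. The following is a sketch using the built-in Windows netsh tool; the rule name and port match the steps above, so use it only as an alternative to the GUI procedure, not in addition to it:

```
netsh advfirewall firewall add rule name="SQL Server" dir=in action=allow protocol=TCP localport=1433
```

Running `netsh advfirewall firewall show rule name="SQL Server"` afterward displays the rule and confirms it was created.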

19. In the vCenter SQL Server VMware console, click the ninth button (CD with a wrench) to map the Microsoft SQL Server 2012 with SP1 ISO. Select Connect to ISO Image on Local Disk.

20. Navigate to the SQL Server 2012 with SP1 ISO, select it, and click Open.

21. In the dialog box, click Run setup.exe.

22. In the SQL Server Installation Center window, click Installation on the left.

23. Select New SQL Server stand-alone Installation or add features to an existing installation.

24. Click OK.

25. Select Enter the Product Key. Enter a product key and click Next.

26. Read and accept the license terms and choose whether to select the second checkbox. Click Next.

27. Click Install to install the setup support files.

28. Address any warnings except for the Windows firewall warning. Click Next.

Note: The Windows firewall issue was addressed in Step 15.

29. Select SQL Server Feature Installation and click Next.

30. Under Instance Features, select only Database Engine Services.

31. Under Shared Features, select Management Tools - Basic and Management Tools - Complete. Click Next.

32. Click Next.

33. Keep Default Instance selected. Click Next.

34. Click Next for Disk Space Requirements.

35. For the SQL Server Agent service, click in the first cell in the Account Name column and then click <<Browse…>>.

36. Enter the local machine administrator name (for example, systemname\Administrator), click Check Names, and click OK.

37. Enter the administrator password in the first cell under Password.

38. Change the startup type for SQL Server Agent to Automatic.

39. For the SQL Server Database Engine service, select Administrator in the Account Name column and enter the administrator password again. Click Next.

40. Select Mixed Mode (SQL Server Authentication and Windows Authentication). Enter and confirm the password for the SQL Server system administrator (sa) account, click Add Current User, and click Next.

41. Choose whether to send error reports to Microsoft. Click Next.

42. Click Next.

43. Click Install.

44. After the installation is complete, click Close to close the SQL Server installer.

45. Close the SQL Server Installation Center.

46. Install all available Microsoft Windows updates by navigating to Start > Control Panel > Windows Update.

47. Open the SQL Server Management Studio by selecting Start > SQL Server Management Studio.

48. Under Server Name, select the local machine name. Under Authentication, select SQL Server Authentication. Enter sa in the Login field and enter the sa password. Click Connect.

49. Click New Query.

50. Run the following script, substituting the vpxuser password for <Password>:

use [master]
go
CREATE DATABASE [VCDB] ON PRIMARY
(NAME = N'vcdb', FILENAME = N'C:\VCDB.mdf', SIZE = 3000KB, FILEGROWTH = 10% )
LOG ON
(NAME = N'vcdb_log', FILENAME = N'C:\VCDB.ldf', SIZE = 1000KB, FILEGROWTH = 10%)
COLLATE SQL_Latin1_General_CP1_CI_AS
go
use VCDB
go
sp_addlogin @loginame=[vpxuser], @passwd=N'<Password>', @defdb='VCDB',
@deflanguage='us_english'
go
ALTER LOGIN [vpxuser] WITH CHECK_POLICY = OFF
go
CREATE USER [vpxuser] for LOGIN [vpxuser]
go
use MSDB
go
CREATE USER [vpxuser] for LOGIN [vpxuser]
go
use VCDB
go
sp_addrolemember @rolename = 'db_owner', @membername = 'vpxuser'
go
use MSDB
go
sp_addrolemember @rolename = 'db_owner', @membername = 'vpxuser'
go
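After the script runs, the grants can be spot-checked with a short query. This is an optional sketch; it assumes the script above completed and uses the same vpxuser login:

```
use VCDB
go
SELECT IS_ROLEMEMBER('db_owner', 'vpxuser')
go
use MSDB
go
SELECT IS_ROLEMEMBER('db_owner', 'vpxuser')
go
```

Each SELECT should return 1, indicating that vpxuser holds the db_owner role in that database.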

51. Click Execute and verify that the query executes successfully.

52. Close Microsoft SQL Server Management Studio.

53. Disconnect the Microsoft SQL Server 2012 ISO from the SQL Server VM.

Build and Set Up VMware vCenter VM

The procedures in the following subsections provide detailed instructions for installing VMware vCenter 5.5 in a VSPEX environment. After the procedures are completed, a VMware vCenter Server will be configured along with a Microsoft SQL Server database to provide database support to vCenter. These deployment procedures are customized to include the environment variables.

Note: This procedure focuses on the installation and configuration of an external Microsoft SQL Server 2012 database, but other types of external databases are also supported by vCenter. To use an alternative database, refer to the VMware vSphere 5.5 documentation for information about how to configure the database and integrate it into vCenter.

To install VMware vCenter 5.5, an accessible Windows Active Directory® (AD) Domain is necessary. If an existing AD Domain is not available, an AD virtual machine, or AD pair, can be set up in this VSPEX environment.

Build VMware vCenter VM

To build the VMware vCenter VM, complete the following steps:

1. Using the instructions for building a SQL Server VM provided in the section "Build Microsoft SQL Server VM," build a VMware vCenter VM with the following configuration in the <<var_ib-mgmt_vlan_id>> VLAN:

4GB RAM

Two CPUs

One virtual network interface

2. Start the VM, install VMware Tools, and assign an IP address and host name to it in the Active Directory domain.

Set Up VMware vCenter VM

To set up the newly built VMware vCenter VM, complete the following steps:

1. Log in to the vCenter VM as the admin user. Open Server Manager and select Local Server.

2. Click Add Roles and Features.

3. Select Role-based or feature-based installation and click Next.

4. Select the current server as the destination server and click Next.

5. Click Next.

6. Expand .NET Framework 3.5 Features and select only .NET Framework 3.5.

7. Click Next.

8. Select Specify an alternate source path if necessary.

9. Click Install.

10. Click Close to close the Add Features wizard.

11. Close Server Manager.

12. Download and install the client components of the Microsoft SQL Server 2012 Native Client from the Microsoft Download Center.

13. Create the vCenter database data source name (DSN). Open ODBC Data Sources (64-bit) by selecting Start > Administrative Tools > ODBC Data Sources (64-bit).

14. Click the System DSN tab.

15. Click Add.

16. Select SQL Server Native Client 11.0 and click Finish.

17. Name the data source VCDB. In the Server field, enter the IP address of the vCenter SQL server. Click Next.

18. Select With SQL Server authentication using a login ID and password entered by the user. Enter vpxuser as the login ID and the vpxuser password. Click Next.

19. Select Change the Default Database To and select VCDB from the list. Click Next.

20. Click Finish.

21. Click Test Data Source. Verify that the test completes successfully.

22. Click OK and then click OK again.

23. Click OK to close the ODBC Data Source Administrator window.

24. Install all available Microsoft Windows updates by navigating to Start > Control Panel > Windows Update.

Note: A restart might be required.

Install VMware vCenter Server

To install vCenter Server on the vCenter Server VM, complete the following steps:

1. In the vCenter Server VMware console, click the ninth button (CD with a wrench) to map the VMware vCenter ISO and select Connect to ISO Image on Local Disk.

2. Navigate to the VMware vCenter 5.5 (VIMSetup) ISO, select it, and click Open.

3. In the dialog box, click Run autorun.exe.

4. In the VMware vCenter Installer window, make sure that VMware vCenter Simple Install is selected and click Install.

5. Click Next to install vCenter Single Sign On.

6. Accept the terms of the license agreement and click Next.

7. Click Next for Simple Install Prerequisites Check.

8. Enter and confirm <<var_password>> for administrator@vsphere.local. Click Next.

9. Enter a site name and click Next.

10. Click Next to select the default HTTPS port for vCenter Single Sign-On.

11. Click Next to select the default destination folder.

12. Click Install to begin the Simple Install.

13. Click Yes to accept the certificate for vSphere Web Client.

14. Enter the vCenter 5.5 license key and click Next.

15. Select Use an Existing Supported Database. Select VCDB from the Data Source Name list and click Next.

16. Enter the vpxuser password and click Next.

17. Click Next to use the SYSTEM Account.

18. Click Next to accept the default ports.

19. Select the appropriate inventory size. Click Next.

20. Click Install.

21. Click Yes to accept the certificate for vCenter Server.

22. Click Finish.

23. Click OK to confirm the installation.

24. Click Exit in the VMware vCenter Installer window.

25. Disconnect the VMware vCenter ISO from the vCenter VM.

26. Install all available Microsoft Windows updates by navigating to Start > Control Panel > Windows Update.

Note: A restart might be required.

Set Up ESXi 5.5 Cluster Configuration

To set up the ESXi 5.5 cluster configuration on the vCenter Server VM, complete the following steps:

1. Using the vSphere Client, log in to the newly created vCenter Server as the admin user.

2. Click Create a data center.

3. Enter VSPEX_DC_1 as the data center name.

4. Right-click the newly created VSPEX_DC_1 data center and select New Cluster.

5. Name the cluster VSPEX_Management and select the checkboxes for Turn On vSphere HA and Turn On vSphere DRS. Click Next.

6. Accept the defaults for vSphere DRS. Click Next.

7. Accept the defaults for Power Management. Click Next.

8. Accept the defaults for vSphere HA. Click Next.

9. Accept the defaults for Virtual Machine Options. Click Next.

10. Accept the defaults for VM Monitoring. Click Next.

11. Accept the defaults for VMware EVC. Click Next.

Note: If mixing Cisco UCS B-Series or C-Series M2 and M3 servers within a vCenter cluster, it is necessary to enable VMware Enhanced vMotion Compatibility (EVC) mode. For more information about setting up EVC mode, refer to Enhanced vMotion Compatibility (EVC) Processor Support.

12. Select Store the swapfile in the same directory as the virtual machine (recommended). Click Next.

13. Click Finish.

14. Right-click the newly created VSPEX_Management cluster and select Add Host.

15. In the Host field, enter either the IP address or the host name of the VM-Host-Infra-01 host. Enter root as the user name and the root password for this host. Click Next.

16. Click Yes.

17. Click Next.

18. Select Assign a New License Key to the Host. Click Enter Key and enter a vSphere license key. Click OK, and then click Next.

19. Click Next.

20. Click Next.

21. Click Finish. VM-Host-Infra-01 is added to the cluster.

22. Repeat this procedure to add VM-Host-Infra-02 to the cluster.

23. Create two additional clusters named XenDesktopHVD and XenDesktopRDS.

24. Add hosts XenDesktopHVD-01 through XenDesktopHVD-03 to the XenDesktopHVD cluster.

25. Add hosts XenDesktopRDS-01 through XenDesktopRDS-05 to the XenDesktopRDS cluster.

Installing and Configuring Citrix Licensing and Provisioning Components

To prepare the required infrastructure to support the Citrix XenDesktop Hosted Virtual Desktop and Hosted Shared Desktop environment, the following procedures were followed.

Installing Citrix License Server

XenDesktop requires Citrix licensing to be installed. For this CVD, we implemented a dedicated server for licensing. If you already have an existing license server, Citrix recommends that you upgrade it to the latest version when you upgrade or install new Citrix products. New license servers are backward compatible and work with older products and license files; however, new products often require the newest license server to check out licenses correctly.

Instructions Visual

Insert the Citrix XenDesktop 7.5 ISO and launch the installer.

Click Start under XenDesktop

To begin the installation of Citrix License Server, click "Extend Deployment – Citrix License Server."

Read the Citrix License Agreement. If acceptable, indicate your acceptance of the license by selecting the "I have read, understand, and accept the terms of the license agreement" radio button.

Click Next

A dialog appears to install the License Server.

Click Next

Select the default ports and automatically configured firewall rules.

Click Next

A Summary screen appears.

Click the Install button to begin the installation.

A message appears indicating that the installation has completed successfully.

Click Finish

Copy the license files to the default location (C:\Program Files (x86)\Citrix\Licensing\MyFiles) on the license server (XENLIC in this CVD). Restart the server or services so that the licenses are activated.

Run the Citrix License Administration application.

Confirm that the license files have been read and enabled correctly.

Installing Provisioning Services

In most implementations, there is a single vDisk providing the standard image for multiple target devices. Thousands of target devices can use a single vDisk shared across multiple Provisioning Services (PVS) servers in the same farm, simplifying virtual desktop management. This section describes the installation and configuration tasks required to create a PVS implementation.

The PVS server can have many stored vDisks, and each vDisk can be several gigabytes in size. Your streaming performance and manageability can be improved using a RAID array, SAN, or NAS. PVS software and hardware requirements are available at http://support.citrix.com/proddocs/topic/provisioning-7/pvs-install-task1-plan-6-0.html.

Prerequisites

Only one MS SQL database is associated with a farm. You can choose to install the Provisioning Services database software on an existing SQL database, if that machine can communicate with all Provisioning Servers within the farm, or on a new SQL Express database machine created using the free SQL Express software from Microsoft.

The following MS SQL 2008, MS SQL 2008 R2, and MS SQL 2012 Server (32- or 64-bit editions) databases can be used for the Provisioning Services database: SQL Server Express Edition, SQL Server Workgroup Edition, SQL Server Standard Edition, and SQL Server Enterprise Edition. Microsoft SQL Server was installed separately for this CVD.

Instructions Visual

Insert the Citrix Provisioning Services 7.1 ISO and let AutoRun launch the installer.

Click the Server Installation button.

Click the Install Server button.

The installation wizard checks and resolves dependencies and then begins the PVS server installation process. It is recommended that you temporarily disable antivirus software prior to the installation.

Click Install on the prerequisites dialog.

Click Yes when prompted to install the SQL Native Client.

Click Next when the Installation wizard starts.

Review the license agreement terms. If acceptable, select the radio button labeled "I accept the terms in the license agreement."

Click Next

Provide User Name and Organization information.

Select who will see the application.

Click Next

Accept the default installation location.

Click Next

Click Install to begin the installation.

Click Finish when the install is complete.

Click OK to acknowledge that the PVS console has not yet been installed.

The PVS Configuration Wizard starts automatically.

Click Next

Since the PVS server is not the DHCP server for the environment, select the radio button labeled "The service that runs on another computer."

Click Next

Since this server will be a PXE server, select the radio button labeled "The service that runs on this computer."

Click Next

Since this is the first server in the farm, select the radio button labeled "Create farm."

Click Next

Enter the name of the SQL server.

Note: If using a cluster instead of AlwaysOn availability groups, you will need to supply the instance name as well.

Click Next

Optionally provide a Database name, Farm name, Site name, and Collection name for the PVS farm.

Select the Administrators group for the Farm Administrator group.

Click Next

Provide a vDisk Store name and the storage path to the EMC vDisk share.

Note: Create the share using EMC's native support for SMB3.

Click Next

Provide the FQDN of the License Server.

Optionally, provide a port number if it was changed on the license server.

Click Next

If an Active Directory service account is not already set up for the PVS servers, create that account prior to clicking Next on this dialog.

Select the Specified user account radio button.

Complete the User name, Domain, Password, and Confirm password fields, using the PVS account information created earlier.

Click Next

Set the Days between password updates to 30.

Note: This value will vary per environment; 30 days was appropriate for testing purposes in this configuration.

Click Next

Keep the defaults for the network cards.

Click Next

Disable the Use the Provisioning Services TFTP service checkbox, as the TFTP service will be hosted on the EMC VNX storage.

Click Next

Click Finish to start the installation.

When the installation is completed, click the Done button.

From the main installation screen, select Console Installation.

Click Next

Read the Citrix License Agreement. If acceptable, select the radio button labeled "I accept the terms in the license agreement."

Click Next

Optionally provide User Name and Organization.

Click Next

Accept the default path.

Click Next

Leave the Complete radio button selected.

Click Next

Click the Install button to start the console installation.

When the installation completes, click Finish to close the dialog box.

Configuring Store and Boot Properties for PVS1

Instructions Visual

From the Windows Start screen for the Provisioning Server PVS1, launch the Provisioning Services Console.

Select Connect to Farm.

Enter localhost for the PVS1 server.

Click Connect.

Select Store Properties from the pull-down menu.

In the Store Properties dialog, add the Default store path to the list of Default write cache paths.

Click Validate. If the validation is successful, click OK to continue.

Installation of Additional PVS Servers

Complete the same installation steps on the additional PVS servers, up to the configuration step that asks you to create or join a farm. In this CVD, we repeated the procedure to add the second and third PVS servers.

Instructions Visual

On the Farm Configuration dialog, select "Join existing farm."

Click Next

Provide the FQDN of the SQL Server.

Click Next

Accept the Farm Name.

Click Next.

Accept the Existing Site.

Click Next

Accept the existing vDisk store.

Click Next

Provide the PVS service account information.

Click Next

Set the Days between password updates to 30.

Click Next

Accept the network card settings.

Click Next

Disable the Use the Provisioning Services TFTP service checkbox, as the TFTP service will be hosted on the EMC VNX storage.

Click Next

Click Finish to start the installation process.

Click Done when the installation finishes.

You can optionally install the Provisioning Services console on the second and third PVS virtual machines following the procedure in section 6.8.2, Installing Provisioning Services.

After completing the steps to install the second and third PVS servers, launch the Provisioning Services Console to verify that the PVS Servers and Stores are configured and that DHCP boot options are defined.

Installing and Configuring XenDesktop 7.5 Components

This section details the installation of the core components of the XenDesktop 7.5 system.

Installing the XenDesktop Delivery Controllers

This CVD installs two XenDesktop Delivery Controllers to support both hosted shared desktops (RDS) and pooled virtual desktops (VDI).

Installing the XenDesktop Delivery Controller and Other Software Components

The process of installing the XenDesktop Delivery Controller also installs other key XenDesktop software components, including Studio, which is used to create and manage infrastructure components, and Director, which is used to monitor performance and troubleshoot problems.

Instructions Visual

Citrix recommends that you use Secure HTTP (HTTPS) and a digital certificate to protect vSphere communications, preferably a digital certificate issued by a certificate authority (CA) according to your organization's security policy. Otherwise, if security policy allows, use the VMware-installed self-signed certificate. To do this:

1. Add the FQDN of the computer running vCenter Server to the hosts file on that server, located at %SystemRoot%\System32\drivers\etc\. This step is required only if the FQDN of the computer running vCenter Server is not already present in DNS.

2. Open Internet Explorer and enter the address of the computer running vCenter Server (for example, https://FQDN) as the URL.

3. Accept the security warnings.

4. Click the Certificate Error in the Security Status bar and select View certificates.

5. Click Install certificate, select Local Machine, and then click Next.

6. Select Place all certificates in the following store and then click Browse.

7. Select Show physical stores.

8. Select Trusted People.

9. Click Next and then click Finish.
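For step 1 above, the hosts file entry is a single line mapping an IP address to the FQDN. A hypothetical example follows (both the address and the name are placeholders, not values from this CVD):

```text
# %SystemRoot%\System32\drivers\etc\hosts
10.10.10.15    vcenter.mydomain.local
```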

To begin the installation, connect to the first XenDesktop server and launch the installer from the Citrix XenDesktop 7.5 ISO.

Click Start


The installation wizard presents a menu with three subsections.

Click on “Get Started - Delivery Controller.”

Read the Citrix License Agreement.

If acceptable, indicate your acceptance of the license by selecting the “I have read, understand, and accept the terms of the license agreement” radio button.

Click Next


Select the components to be installed:

Delivery Controller

Studio

Director

In this CVD, the License Server has already been installed and StoreFront is installed on separate virtual machines. Uncheck License Server and StoreFront.

Click Next

The Microsoft SQL Server is installed separately and Windows Remote Assistance is not required, so uncheck the boxes to install these components.

Click Next


Select the default ports and automatically configured firewall rules.

Click Next

The Summary screen is shown.

Click the Install button to begin the installation.

The installer displays a message when the installation is complete.


Confirm all selected components were successfully installed.

Verify the Launch Studio checkbox is enabled.

Click Finish

XenDesktop Controller Configuration

Citrix Studio is a management console that allows you to create and manage infrastructure and resources to deliver desktops and applications. Replacing Desktop Studio from earlier releases, it provides wizards to set up your environment, create workloads to host applications and desktops, and assign applications and desktops to users.

Citrix Studio launches automatically after the XenDesktop Delivery Controller installation, or if necessary, it can be launched manually. Studio is used to create a Site, which is the core XenDesktop 7 environment consisting of the Delivery Controller and the Database.


Click on the Deliver applications and desktops to your users button.


Select the “A fully configured, production-ready Site (recommended for new users)” radio button.

Enter a site name.

Click Next

Provide the Database Server location.

Click the Test connection… button to verify that the database is accessible.

NOTE: If using a clustered database instead of the AlwaysOn configuration, the SQL instance name must also be supplied. Ignore any errors and continue.


Click OK to have the installer create the database.

Click Next


Provide the FQDN of the license server.

Click Connect to validate and retrieve any licenses from the server.

NOTE: If no licenses are available, you can use the 30-day free trial or activate a license file.

Select the appropriate product edition using the license radio button.

Click Next

Select the Host Type of VMware vSphere.

Enter the FQDN of the vSphere server.

Enter the username (in domain\username format) for the vSphere account.

Provide the password for the vSphere account.

Provide a connection name.

Select the Other tools radio button since Provisioning Services will be used.

Click Next

Enter a resource name.

Select a VMware cluster and virtual network that will be used by the virtual desktops.

Click Next


Select a datastore on the Storage dialog

Click Next

Click Next on the App-V Publishing dialog

Click Finish to complete the deployment.


Once the deployment is complete, click the Test Site button.

All 178 tests should pass successfully.

Click Finish

Additional XenDesktop Controller Configuration

After the first controller is completely configured and the Site is operational, you can add additional controllers. In this CVD, we created two Delivery Controllers.


To begin the installation of the second Delivery Controller, connect to the second XenDesktop server and launch the installer from the Citrix XenDesktop 7.5 ISO.

Click Start


Repeat the same steps used to install the first Delivery Controller, including the step of importing an SSL certificate for HTTPS between the controller and vSphere.

Review the Summary configuration.

Click Install

Confirm all selected components were successfully installed.

Verify the Launch Studio checkbox is enabled.

Click Finish


Click on the Connect this Delivery Controller to an existing Site button.

Enter the FQDN of the first delivery controller.

Click OK

Click Yes to allow the database to be updated with this controller’s information automatically.


When complete, verify the site is functional by clicking the Test Site button.

Click Finish to close the test results dialog.

Adding Host Connections and Resources with Citrix Studio

Citrix Studio provides wizards to guide the process of setting up an environment and creating desktops. The steps below set up a host connection for a cluster of VMs for the HVD and HSD desktops.

Note: The instructions below outline the procedure to add a host connection and resources for HVD desktops. When you’ve completed these steps, repeat the procedure to add a host connection and resources for HSDs.


Connect to the XenDesktop server and launch Citrix Studio.

From the Configuration menu, select Hosting.

Select Add Connection and Resources.

On the Connections dialog, specify vSphere55 as the existing connection to be used.

Click Next

On the Resources dialog, specify a resource name, cluster, and an appropriate network.

Click Next


On the Storage dialog, specify the shared storage for the new VMs. This applies to the PVS Write Cache datastores.

Click Next

Review the Summary.

Click Finish

Installing and Configuring StoreFront

Citrix StoreFront stores aggregate desktops and applications from XenDesktop sites, making resources readily available to users. In this CVD, StoreFront is installed on a separate virtual machine from other XenDesktop components. Log into that virtual machine and start the installation process from the Citrix XenDesktop 7.5 ISO.

The installation wizard presents a menu with three subsections.


To begin the installation of StoreFront, click on Citrix StoreFront under the “Extend Deployment” heading.

Read the Citrix License Agreement.

If acceptable, indicate your acceptance of the license by selecting the “I have read, understand, and accept the terms of the license agreement” radio button.

Click Next


StoreFront is shown as the component to be installed.

Click Next

Select the default ports and automatically configured firewall rules.

Click Next

The Summary screen is shown.

Click the Install button to begin the installation.

The installer displays a message when the installation is complete.


Verify that the checkbox to “Open the StoreFront Management Console” is enabled.

Click Finish

The StoreFront Management Console launches automatically after installation, or if necessary, it can be launched manually.


Click on the “Create a new deployment” button.


Enter the Base URL to be used to access StoreFront services.

Click Next

Enter a Store Name.

Click Next

On the Create Store page, specify the XenDesktop Delivery Controller and servers that will provide resources to be made available in the store.

Click Add.


In the Add Delivery Controller dialog box, add servers for the XenDesktop Delivery Controller. List the servers in failover order.

Click OK to add each server to the list.

After adding the list of servers, specify a Display name for the Delivery Controller. For testing purposes, set the default transport type and port to HTTP on port 80.

Click OK to add the Delivery Controller.

NOTE: HTTPS is recommended for production deployments.


On the Remote Access page, accept None (the default).

Click the Create button to begin creating the store.

A message indicates when the store creation process is complete. The Create Store page lists the Website for the created store.

Click Finish

On the second StoreFront server, complete the previous installation steps up to the configuration step where the StoreFront Management Console launches. At this point, the console allows you to choose between “Create a new deployment” or “Join an existing server group.”


For the additional StoreFront server, select “Join an existing server group.”

In the Join Server Group dialog, enter the name of the first StoreFront server.

Before the additional StoreFront server can join the server group, you must connect to the first StoreFront server, add the second server, and obtain the required authorization information.

Connect to the first StoreFront server.


Using the StoreFront menu on the left, you can scroll through the StoreFront management options.

Select Server Group from the menu.

At this point, the Server Group contains a single StoreFront server.

Select Generate Security Keys from the Actions menu on the right. Security keys are needed for signing and encryption.

A dialog window appears.

Click Generate Keys

Select Server Group from the menu.

To generate the authorization information that allows the additional StoreFront server to join the server group, select Add Server.

Copy the Authorization code from the Add Server dialog.


Connect to the second StoreFront server and paste the Authorization code into the Join Server Group dialog.

Click Join

A message appears when the second server has joined successfully.

Click OK

The Server Group now lists both StoreFront servers in the group.

Desktop Delivery Golden Image Creation and Resource Provisioning

This section provides details on how to use the Citrix XenDesktop 7.5 delivery infrastructure to create virtual desktop golden images and to deploy the virtual machines.

Overview of Desktop Delivery

The advantage of using Citrix Provisioning Services (PVS) is that it allows VMs to be provisioned and re-provisioned in real time from a single shared disk image called a virtual disk (vDisk). By streaming a vDisk rather than copying images to individual machines, PVS allows organizations to manage a small number of disk images even as the number of VMs grows, providing the benefits of centralized management, distributed processing, and efficient use of storage capacity.

In most implementations, a single vDisk provides a standardized image to multiple target devices. Multiple PVS servers in the same farm can stream the same vDisk image to thousands of target devices. Virtual desktop environments can be customized through the use of write caches and by personalizing user settings through Citrix User Profile Management.

This section describes the installation and configuration tasks required to create standardized master vDisk images using PVS. This section also discusses write cache sizing and placement considerations, and how policies in Citrix User Profile Management can be configured to further personalize user desktops.

Overview of PVS vDisk Image Management

After installing and configuring PVS components, a vDisk is created from a device’s hard drive by taking a snapshot of the OS and application image and then storing that image as a vDisk file on the network. vDisks can exist on a Provisioning Server, on a file share, or, in larger deployments (as in this CVD), on a storage system with which the Provisioning Server can communicate (via iSCSI, SAN, NAS, or CIFS). A PVS server can access many stored vDisks, and each vDisk can be several gigabytes in size. For this solution, the vDisk was stored on a CIFS share located on the EMC storage.

vDisks can be assigned to a single target device in Private Image Mode, or to multiple target devices in Standard Image Mode. In Standard Image mode, the vDisk is read-only, which means that multiple target devices can stream from a single vDisk image simultaneously. Standard Image mode reduces the complexity of vDisk management and the amount of storage required, since images are shared. In contrast, when a vDisk is configured to use Private Image Mode, the vDisk is read/write and only one target device can access the vDisk at a time.

When a vDisk is configured in Standard Image mode, each time a target device boots, it always boots from a “clean” vDisk image. Each target device then maintains a Write Cache to store any writes that the operating system needs to make, such as the installation of user-specific data or applications. Each virtual desktop is assigned a Write Cache disk (a differencing disk) where changes to the default image are recorded. Used by the virtual Windows operating system throughout its working life cycle, the Write Cache is written to a dedicated virtual hard disk created by thin provisioning and attached to each new virtual desktop.

Overview – Golden Image Creation

For this CVD, PVS supplies these master (or “golden”) vDisk images to the target devices:

Table 8. Golden Image Descriptions

To build the vDisk images, OS images of Microsoft Windows 7 and Windows Server 2012, along with additional software, were initially installed and prepared as standard virtual machines on vSphere. These master target VMs (called Win7 and W2012) were then converted into separate Citrix PVS vDisk files. Citrix PVS and the XenDesktop Delivery Controllers use the golden vDisk images to instantiate new desktop virtual machines on vSphere.

In this CVD, virtual machines for the hosted shared and hosted virtual desktops were created using the XenDesktop Setup Wizard. The XenDesktop Setup Wizard (XDSW) does the following:

1. Creates VMs on a XenDesktop hosted hypervisor server from an existing template.

2. Creates PVS target devices for each new VM within a new or existing collection matching the XenDesktop catalog name.

3. Assigns a Standard Image vDisk to VMs within the collection.

4. Adds virtual desktops to a XenDesktop Machine Catalog.

In this CVD, virtual desktops were optimized according to best practices for performance. (The “Optimize performance” checkbox was selected during the installation of the VDA, and the “Optimize for Provisioning Services” checkbox was selected during the PVS image creation process using the PVS Imaging Wizard.)


Write-Cache Drive Sizing and Placement

When considering a PVS deployment, there are design decisions that need to be made regarding the write cache for the virtual desktop devices that leverage Provisioning Services. The write cache is a cache of all data that the target device has written. If data is written to the PVS vDisk in a caching mode, the data is not written back to the base vDisk; instead, it is written to a write cache file. It is important to consider write cache sizing and placement when scaling virtual desktops with PVS servers.

There are several options as to where the write cache can be placed, such as on the PVS server, in hypervisor RAM, or on a device local disk (usually an additional vDisk for VDI instances). For this study, we used PVS 7.1 to manage desktops with the write cache placed on the target device’s storage (e.g., EMC Write Cache volumes) for each virtual machine, which allows the design to scale more effectively. Optionally, write cache files can be stored on SSDs located on each of the virtual desktop host servers.

For Citrix PVS pooled desktops, write cache size needs to be calculated based on how often the user reboots the desktop and the types of applications used. We recommend using a write cache twice the size of the RAM allocated to each individual VM. For example, if a VM is allocated 1.5 GB of RAM, use at least a 3 GB write cache vDisk for that VM.
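The 2x-RAM sizing rule above can be expressed as a quick calculation. The sketch below is illustrative only; the function name and the round-up behavior are our own assumptions, not part of the CVD.

```python
import math

def write_cache_gb(vm_ram_gb: float, multiplier: float = 2.0) -> int:
    """Minimum recommended PVS write-cache vDisk size in GB: a multiple
    (2x by default) of the RAM allocated to each target VM, rounded up
    to a whole gigabyte."""
    return math.ceil(vm_ram_gb * multiplier)

# The CVD's example: a VM allocated 1.5 GB of RAM needs at least a
# 3 GB write-cache vDisk.
print(write_cache_gb(1.5))
```

By this rule the 24 GB HSD master VMs would call for 48 GB; the 50 GB write-cache disks used for the Server 2012 machines in this solution satisfy it with a small margin.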

For this solution, 6 GB virtual disks were assigned to the Windows 7-based virtual machines used in the desktop creation process. The PVS Target Device agent installed in the Windows 7 gold image automatically places the Windows swap file on the same drive used by the PVS Write Cache when this mode is enabled. 50 GB write cache virtual disks were used for the Server 2012 desktop machines.

Preparing the Master Targets

This section provides guidance around creating the golden (or master) images for the environment. VMs for the master targets must first be installed with the software components needed to build the golden images. For this CVD, the images contain the basics needed to run the Login VSI workload.

To prepare the master VMs for the Hosted Virtual Desktops (HVDs) and Hosted Shared Desktops (HSDs), there are three major steps: installing the PVS Target Device x64 software, installing the Virtual Delivery Agents (VDAs), and installing application software.

The master target HVD and HSD VMs were configured as follows:

Table 9. OS Configurations

vDisk Feature        Hosted Virtual Desktops                Hosted Shared Desktops
Virtual CPUs         1 vCPU                                 5 vCPUs
Memory               1.5 GB                                 24 GB
vDisk size           40 GB                                  60 GB
Virtual NICs         1 virtual VMXNET3 NIC                  1 virtual VMXNET3 NIC
vDisk OS             Microsoft Windows 7 Enterprise (x86)   Microsoft Windows Server 2012
Additional software  Microsoft Office 2010, Login VSI 3.7   Microsoft Office 2010, Login VSI 3.7
Test workload        Login VSI “medium” workload            Login VSI “medium” workload
                     (knowledge worker)                     (knowledge worker)

The software installed on each image before cloning the vDisk included:

Citrix Provisioning Server Target Device (32-bit for HVD and 64-bit for HSD)

Microsoft Office Professional Plus 2010 SP1

Internet Explorer 8.0.7601.17514 (HVD only; Internet Explorer 10 is included with Windows Server 2012 by default)

Login VSI 3.7 (which includes additional software used for testing: Adobe Reader 9.1, Macromedia Flash, Macromedia Shockwave, Bullzip PDF Printer, etc.)

Installing the PVS Target Device Software

The Master Target Device refers to the target device from which a hard disk image is built and stored on a vDisk. Provisioning Services then streams the contents of the created vDisk to other target devices. This procedure installs the PVS Target Device software that is used to build the HVD and HSD golden images.

The instructions below outline the installation procedure to configure a vDisk for HVD desktops. When you’ve completed these installation steps, repeat the procedure to configure a vDisk for HSDs.


On the Master Target Device, first run Windows Update and install any identified updates.

Click Yes to install.

Note: This step only applies to Windows 7.

Restart the machine when the installation is complete.

Launch the PVS installer from the Provisioning Services DVD.

Click the Target Device Installation button


The installation wizard will check to resolve dependencies and then begin the PVS target device installation process.

The wizard's Welcome page appears.

Click Next.


Read the license agreement. If you agree, select the “I accept the terms in the license agreement” radio button.

Click Next

Enter User and Organization names and click Next.


Select the Destination Folder for the PVS Target Device program and click Next.

Confirm the installation settings and click Install.


A confirmation screen appears indicating that the installation completed successfully.

Clear the checkbox to launch the Imaging Wizard and click Finish.

Reboot the machine to begin the VDA installation process.

Installing XenDesktop Virtual Desktop Agents

Virtual Delivery Agents (VDAs) are installed on the server and workstation operating systems and enable connections for desktops and apps. The following procedure was used to install VDAs for both HVD and HSD environments.

By default, when you install the Virtual Delivery Agent, Citrix User Profile Management is installed silently on master images. (Using Profile Management as a profile solution is optional but was used for this CVD, and is described in a later section.)


Launch the XenDesktop installer from the ISO image or DVD.

Click Start on the Welcome Screen.


To install the VDA for the Hosted VDI Desktops, select Virtual Delivery Agent for Windows Desktop OS. (After the VDA is installed for Hosted VDI Desktops, repeat the procedure to install the VDA for Hosted Shared Desktops. In this case, select Virtual Delivery Agent for Windows Server OS and follow the same basic steps.)

Select “Create a Master Image”.

Click Next

For the HVD vDisk, select “No, install the standard VDA”.

Click Next


Select Citrix Receiver.

Click Next

Select “Do it manually” and specify the FQDN of the Delivery Controllers.

Click Next

Accept the default features.

Click Next.


Allow the firewall rules to be configured automatically.

Click Next

Verify the Summary and click Install.

Check “Restart Machine”.

Click Finish and the machine will reboot automatically.


Repeat the procedure so that VDAs are installed for both the HVD (using the Windows 7 OS image) and the HSD desktops (using the Windows Server 2012 image).

Installing Applications on the Master Targets

After the VDA is installed on the target device, install the application stack on the target. The steps below install Microsoft Office Professional Plus 2010 SP1 and Login VSI 3.7 (which also installs Adobe Reader 9.1, Macromedia Flash, Macromedia Shockwave, Bullzip PDF Printer, KidKeyLock, Java, and FreeMind).


Locate the installation wizard or script to install Microsoft Office Professional Plus 2010 SP1. In this CVD, we used the installation script shown to install it on the target.

Run the script. The installation will begin.


Next, install the Login VSI 3.7 software. Locate and run the Login VSI Target Setup Wizard.

Specify the Login VSI Share path.

Click Start

The Setup Wizard installs Login VSI and its applications on the target. The wizard indicates when the installation is complete.

A pop-up outlines a few follow-up configuration steps.


One of those configuration steps involves moving the Active Directory computer account for the target (Win7 or W2012) into the Login VSI Computers OU.

Restart the target VM.

Confirm that the VM contains the required applications for testing.

Clear the NGEN queues for the recently installed applications:

"C:\Windows\Microsoft.NET\Framework\v2.0.50727\ngen.exe" executeQueuedItems

"C:\Windows\Microsoft.NET\Framework\v4.0.30319\ngen.exe" executeQueuedItems

Creating vDisks

The PVS Imaging Wizard automatically creates a base vDisk image from the master target device.

Note: The instructions below describe the process of creating a vDisk for HVD desktops. When you’ve completed these steps, repeat the procedure to build a vDisk for HSDs.


The PVS Imaging Wizard's Welcome page appears.

Click Next.

The Connect to Farm page appears. Enter the name or IP address of a Provisioning Server within the farm to connect to, and the port to use to make that connection.

Use the Windows credentials (default) or enter different credentials.

Click Next.

Select Create new vDisk.

Click Next.


The New vDisk dialog displays. Enter the name of the vDisk, such as Win7 for the Hosted VDI Desktop vDisk (Windows 7 OS image) or W2012 for the Hosted Shared Desktop vDisk (Windows Server 2012 image). Select the Store where the vDisk will reside. Select the vDisk type, either Fixed or Dynamic, from the drop-down menu. (This CVD used Dynamic rather than Fixed vDisks.)

Click Next.

On the Microsoft Volume Licensing page, select the volume license option to use for target devices. For this CVD, volume licensing is not used, so the None button is selected.

Click Next.

Define volume sizes on the Configure Image Volumes page. (For the HVDs and HSDs, vDisks of 40 GB and 60 GB, respectively, were defined.)

Click Next.


The Add Target Device page appears.

Select the Target Device Name, the MAC address associated with one of the NICs that was selected when the target device software was installed on the master target device, and the Collection to which you are adding the device.

Click Next.

A Summary of Farm Changes appears.

Select Optimize for Provisioning Services.

The PVS Optimization Tool appears. Select the appropriate optimizations and click OK.

Review the configuration and click Finish.


The vDisk creation process begins. A dialog appears when the creation process is complete.

Reboot and then configure the BIOS/VM settings for PXE/network boot, putting Network boot from VMware VMXNET3 at the top of the boot device list.

After restarting, log into the HVD or HSD master target. The PVS Imaging conversion process begins, converting C: to the PVS vDisk. A message is displayed when the conversion is complete.


Connect to the PVS server and validate that the vDisk image is visible.

On the vDisk Properties dialog, change Access mode to “Standard Image (multi-device, read-only access)”.

Set the Cache Type to “Cache on device hard drive.”

Click OK.

Repeat this procedure to create vDisks for both the Hosted VDI Desktops (using the Windows 7 OS image) and the Hosted Shared Desktops (using the Windows Server 2012 image).

Creating Desktops with the PVS XenDesktop Setup Wizard

Provisioning Services includes the XenDesktop Setup Wizard, which automates the creation of virtual machines to support HVD and HSD use cases.

Note: The instructions below outline the procedure to run the wizard and create VMs for HVD desktops. When you’ve completed these steps, repeat the procedure to create VMs for HSD desktops.


Start the XenDesktop Setup Wizard from the Provisioning Services Console.

Right-click on the Site.

Choose XenDesktop Setup Wizard… from the context menu.

On the opening dialog, click Next.


Enter the XenDesktop Controller address that will be used for the wizard operations.

Click Next.

Select the Host Resources on which the virtual machines will be created.

Click Next.

Provide the Host Resources Credentials (Username and Password) to the XenDesktop controller when prompted.

Click OK.


Select the Template created earlier.

Click Next.

Select the vDisk that will be used to stream to the virtual machine.

Click Next.


Select “Create a new catalog”.

NOTE: The catalog name is also used as the collection name in the PVS site.

Click Next.

On the Operating System dialog, specify the operating system for the catalog. Specify Windows Desktop Operating System for HVDs and Windows Server Operating System for HSDs.

Click Next.


If you specified a Windows Desktop OS for HVDs, a User Experience dialog appears. Specify that the user will connect to “A fresh new (random) desktop each time.”

Click Next.

On the Virtual machines dialog, specify:

- The number of VMs to create. (Note that it is recommended to create 40 or fewer per run; we created a single VM at first to verify the procedure.)
- The number of vCPUs for the VM (1 for HVDs, 5 for HSDs)
- The amount of memory for the VM (1.5GB for HVDs, 24GB for HSDs)
- The write-cache disk size (6GB for HVDs, 50GB for HSDs)
- PXE boot as the Boot Mode

Click Next.
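As a quick sanity check, the per-VM figures entered in the wizard can be multiplied out to an aggregate footprint for a pool. The sketch below is illustrative only: the 300-desktop HVD count comes from this CVD's test configuration, while the HSD VM count is a hypothetical value chosen for the example, not a figure from the wizard.

```python
# Illustrative sizing arithmetic for the per-VM figures above.
# HVD: 1 vCPU, 1.5 GB RAM, 6 GB write-cache disk.
# HSD: 5 vCPU, 24 GB RAM, 50 GB write-cache disk per session-host VM.

def aggregate(count, ram_gb, cache_gb):
    """Return (total RAM in GB, total write-cache in GB) for `count` VMs."""
    return count * ram_gb, count * cache_gb

hvd_ram, hvd_cache = aggregate(300, 1.5, 6)   # 300 HVD desktops in this design
print(f"HVD pool: {hvd_ram} GB RAM, {hvd_cache} GB write cache")

hsd_vms = 25                                  # hypothetical VM count, for illustration
hsd_ram, hsd_cache = aggregate(hsd_vms, 24, 50)
print(f"HSD pool ({hsd_vms} VMs): {hsd_ram} GB RAM, {hsd_cache} GB write cache")
```

Totals like these help confirm that the write-cache datastore and host memory are sized before letting the wizard create the full batch of machines.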


Select the Create new accounts radio button.

Click Next.

Specify the Active Directory Accounts and Location. This is where the wizard should create the computer accounts.

Provide the Account naming scheme (e.g., TestHVD### or TestHSD###). An example name is shown in the text box below the name scheme selection location.

Click Next.


Click Finish to begin the virtual machine creation.

When the wizard is done creating the virtual machines, click Done.

NOTE: VM setup takes ~45 seconds per provisioned virtual desktop.


Start one of the newly created virtual machines and confirm that it boots and operates successfully.

Using vCenter, the Virtual Machines tab should also show that the VM is Powered On and operational.

Creating Delivery Groups

Delivery Groups are collections of machines that control access to desktops and applications. With Delivery Groups, you can specify which users and groups can access which desktops and applications.

Note: The instructions below outline the procedure to create a Delivery Group for HSD desktops. When you’ve completed these steps, repeat the procedure to create a Delivery Group for HVD desktops.


Connect to a XenDesktop server and launch Citrix Studio.

Choose Create Delivery Group from the pull-down menu.


An information screen may appear.

Click Next.

Specify the Machine Catalog and increment the number of machines to add.

Click Next.

Specify what the machines in the catalog will deliver: Desktops, Desktops and Applications, or Applications.

Select Desktops.

Click Next.


To make the Delivery Group available, you must add users.

Click Add Users.

In the Select Users or Groups dialog, add users or groups.

Click OK. When users have been added, click Next on the Assign Users dialog (shown above).

Enter the StoreFront configuration for how Receiver will be installed on the machines in this Delivery Group. Click “Manually, using a StoreFront server address that I will provide later.”

Click Next.


On the Summary dialog, review the configuration. Enter a Delivery Group name and a Display name (e.g., HVD or HSD).

Click Finish.

Citrix Studio lists the created Delivery Groups and the type, number of machines created, sessions, and applications for each group in the Delivery Groups tab.

On the pull-down menu, select “Turn on Maintenance Mode.”

Citrix XenDesktop Policies and Profile Management

Policies and profiles allow the Citrix XenDesktop environment to be easily and efficiently customized.

Configuring Citrix XenDesktop Policies

Citrix XenDesktop policies control user access and session environments, and are the most efficient method of controlling connection, security, and bandwidth settings. You can create policies for specific groups of users, devices, or connection types with each policy. Policies can contain multiple settings and are typically defined through Citrix Studio. (The Windows Group Policy Management Console can also be used if the network environment includes Microsoft Active Directory and permissions are set for managing Group Policy Objects.) The screenshot below shows policies for Login VSI testing in this CVD.


Figure 24: XenDesktop Policy

Configuring User Profile Management

Profile management provides an easy, reliable, and high-performance way to manage user personalization settings in virtualized or physical Windows environments. It requires minimal infrastructure and administration, and provides users with fast logons and logoffs. A Windows user profile is a collection of folders, files, registry settings, and configuration settings that define the environment for a user who logs on with a particular user account. These settings may be customizable by the user, depending on the administrative configuration. Examples of settings that can be customized are:

Desktop settings such as wallpaper and screen saver

Shortcuts and Start menu settings

Internet Explorer Favorites and Home Page

Microsoft Outlook signature

Printers

Some user settings and data can be redirected by means of folder redirection. However, if folder redirection is not used, these settings are stored within the user profile.

The first stage in planning a profile management deployment is to decide on a set of policy settings that together form a suitable configuration for your environment and users. The automatic configuration feature simplifies some of this decision-making for XenDesktop deployments. Screenshots of the User Profile Management interfaces that establish policies for this CVD’s HVD and HSD users (for testing purposes) are shown below. Basic profile management policy settings are documented here: http://support.citrix.com/proddocs/topic/xendesktop-71/cds-policies-rules-pm.html.

Figure 25: HVD User Profile Manager Policy

Figure 26: HSD User Profile Manager Policy

Test Setup and Configurations

In this project, we tested a single UCS B200 M3 blade in a single chassis and eight B200 M3 blades in two chassis to illustrate linear scalability for each workload studied.


Cisco UCS Test Configuration for Single Blade Scalability

Figure 27: Cisco UCS B200 M3 Blade Server for Single Server Scalability XenDesktop 7.5 HVD with PVS 7.1 Login VSImax

Figure 28: Cisco UCS B200 M3 Blade Server for Single Server Scalability XenDesktop 7.5 RDS with VM-FEX and PVS 7.1 Login VSImax

Hardware components

1 X Cisco UCS B200-M3 (E5-2680v2 @ 2.8 GHz) blade server with 384GB RAM (24 X 16 GB DIMMs @ 1866 MHz) running ESXi 5.5 as Windows 7 SP1 32-bit Virtual Desktop host, or 256GB RAM (16 X 16 GB DIMMs @ 1866 MHz) running ESXi 5.5 as Windows Server 2012 virtual desktop session host

2 X Cisco UCS B200-M3 (E5-2650v2) blade servers with 128 GB of memory (16 GB X 8 DIMMs @ 1866 MHz) as Infrastructure Servers

4 X Cisco UCS B200-M3 (E5-2680 @ 2.7 GHz) blade servers with 128 GB of memory (16 GB X 8 DIMMs @ 1866 MHz) as Load Generators (optional, for testing purposes only)

1 X VIC1240 Converged Network Adapter per blade (B200 M3)

2 X Cisco UCS 6248UP Fabric Interconnects

2 X Cisco Nexus 5548UP Access Switches

1 X EMC VNX5400 system with 32 x 600GB SAS drives, 25 x 2TB Near-Line SAS drives, and 3 x 100GB Flash drives (FAST Cache), including hot spares

Software components

Cisco UCS firmware 2.2(1d)

Cisco Nexus 1000V virtual distributed switch

Cisco Virtual Machine Fabric Extender (VM-FEX)

VMware ESXi 5.5 VDI Hosts

Citrix XenDesktop 7.5 Hosted Virtual Desktops and RDS Hosted Shared Desktops


Citrix Provisioning Server 7.1

Citrix User Profile Manager

Microsoft Windows 7 SP1 32 bit, 1vCPU, 1.5 GB RAM, 17 GB hard disk/VM

Microsoft Windows Server 2012 SP1, 5vCPU, 24GB RAM, 50 GB hard disk/VM

Cisco UCS Configuration for Two Chassis – Eight Mixed Workload Blade Test, 1000 Users

Figure 29: Two Chassis Test Configuration - 12 B200 M3 Blade Servers – 2000 Mixed Workload Users

Hardware components

3 X Cisco UCS B200-M3 (E5-2680v2 @ 2.8 GHz) blade servers with 384GB RAM (24 X 16 GB DIMMs @ 1866 MHz) running ESXi 5.5 as Windows 7 SP1 32-bit Virtual Desktop hosts (300 desktops with server N+1 fault tolerance)

5 X Cisco UCS B200-M3 (E5-2680v2 @ 2.8 GHz) blade servers with 256GB RAM (16 X 16 GB DIMMs @ 1866 MHz) running ESXi 5.5 as Windows Server 2012 virtual desktop session hosts (700 RDS sessions with server N+1 fault tolerance)


2 X Cisco UCS B200-M3 (E5-2650v2) blade servers with 128 GB of memory (16 GB X 8 DIMMs @ 1866 MHz) as Infrastructure Servers

4 X Cisco UCS B200-M3 (E5-2680 @ 2.7 GHz) blade servers with 128 GB of memory (16 GB X 8 DIMMs @ 1866 MHz) as Load Generators (optional, for testing purposes only)

1 X VIC1240 Converged Network Adapter per blade (B200 M3)

2 X Cisco UCS 6248UP Fabric Interconnects

2 X Cisco Nexus 5548UP Access Switches

1 X EMC VNX5400 system with 32 x 600GB SAS drives, 25 x 2TB Near-Line SAS drives, and 3 x 100GB Flash drives (FAST Cache), including hot spares

Software components

Cisco UCS firmware 2.2(1d)

Cisco Nexus 1000V virtual distributed switch

Cisco Virtual Machine Fabric Extender (VM-FEX)

VMware ESXi 5.5 VDI Hosts

Citrix XenDesktop 7.5 Hosted Virtual Desktops and RDS Hosted Shared Desktops

Citrix Provisioning Server 7.1

Citrix User Profile Manager

Microsoft Windows 7 SP1 32 bit, 1vCPU, 1.5 GB RAM, 17 GB hard disk/VM

Microsoft Windows Server 2012 SP1, 5 vCPU, 24GB RAM, 50 GB hard disk/VM

Testing Methodology and Success Criteria

All validation testing was conducted on-site within the EMC labs in Research Triangle Park, North Carolina.

The testing results focused on the entire process of the virtual desktop lifecycle by capturing metrics during desktop boot-up, user logon and virtual desktop acquisition (also referred to as ramp-up), user workload execution (also referred to as steady state), and user logoff for the XenDesktop 7.5 Hosted Virtual Desktop and RDS Hosted Shared models under test.

Test metrics were gathered from the hypervisor, virtual desktop, storage, and load generation software to assess the overall success of an individual test cycle. Each test cycle was not considered passing unless all of the planned test users completed the ramp-up and steady state phases (described below) and unless all metrics were within the permissible thresholds noted as success criteria.

Three successfully completed test cycles were conducted for each hardware configuration, and results were found to be relatively consistent from one test to the next.

Load Generation

Within each test environment, load generators were utilized to put demand on the system to simulate multiple users accessing the XenDesktop 7.5 environment and executing a typical end-user workflow. To generate load within the environment, an auxiliary software application was required to generate the end-user connection to the XenDesktop 7.5 environment, to provide unique user credentials, to initiate the workload, and to evaluate the end-user experience.

In the Hosted VDI test environment, session launchers were used to simulate multiple users making a direct connection to XenDesktop 7.5 via a Citrix HDX protocol connection.

User Workload Simulation – Login VSI from Login VSI Inc.

One of the most critical factors of validating a desktop virtualization deployment is identifying a real-world user workload that is easy for customers to replicate and standardized across platforms, allowing customers to realistically test the impact of a variety of worker tasks. To accurately represent a real-world user workload, a third-party tool from Login VSI Inc. was used throughout the Hosted VDI testing.

The tool has the benefit of taking measurements of the in-session response time, providing an objective way to measure the expected user experience for individual desktops throughout large-scale testing, including login storms.


The Login Virtual Session Indexer (Login VSI Inc.'s Login VSI 3.7) methodology, designed for benchmarking Server Based Computing (SBC) and Virtual Desktop Infrastructure (VDI) environments, is completely platform and protocol independent and hence allows customers to easily replicate the testing results in their environment. NOTE: In this testing, we utilized the tool to benchmark our VDI environment only.

Login VSI calculates an index based on the number of simultaneous sessions that can be run on a single machine. Login VSI simulates a medium-workload user (also known as a knowledge worker) running generic applications such as Microsoft Office 2007 or 2010, Internet Explorer 8 (including a Flash video applet), and Adobe Acrobat Reader. (Note: For the purposes of this test, applications were installed locally, not streamed by ThinApp.)

Like real users, the scripted Login VSI session will leave multiple applications open at the same time. The medium workload is the default workload in Login VSI and was used for this testing. This workload emulates a medium knowledge worker using Office, IE, printing, and PDF viewing.

Once a session has been started, the medium workload will repeat every 12 minutes.

During each loop, the response time is measured every 2 minutes.

The medium workload opens up to 5 apps simultaneously.

The type rate is 160ms for each character.

Approximately 2 minutes of idle time is included to simulate real-world users.

Each loop will open and use:

Outlook 2007/2010: browse 10 messages.

Internet Explorer: one instance is left open (BBC.co.uk); one instance is browsed to Wired.com, Lonelyplanet.com, and the heavy 480p Flash application gettheglass.com.

Word 2007/2010: one instance to measure response time, one instance to review and edit a document.

Bullzip PDF Printer & Acrobat Reader: the Word document is printed to PDF and reviewed.

Excel 2007/2010: a very large randomized sheet is opened.

PowerPoint 2007/2010: a presentation is reviewed and edited.

7-zip: using the command-line version, the output of the session is zipped.

A graphical representation of the medium workload is shown below.


Graphical overview:

You can obtain additional information and a free test license from http://www.loginvsi.com.

Testing Procedure

The following protocol was used for each test cycle in this study to ensure consistent results.

Pre-Test Setup for Single and Multi-Blade Testing

All virtual machines were shut down utilizing the XenDesktop 7.5 Administrator and vCenter.

All Launchers for the test were shut down. They were then restarted in groups of 10 each minute until the required number of launchers was running with the Login VSI Agent at a “waiting for test to start” state.

All VMware ESXi 5.5 VDI host blades to be tested were restarted prior to each test cycle.


Test Run Protocol

To simulate severe, real-world environments, Cisco requires the log-on and start-work sequence, known as Ramp Up, to complete in 30 minutes. Additionally, we require all sessions started, whether 195 single-server users or 600 full-scale test users, to become active within 2 minutes after the last session is launched.

In addition, Cisco requires that the Login VSI Parallel Launching method is used for all single-server and scale testing. This ensures that our tests represent real-world scenarios. (NOTE: The Login VSI Sequential Launching method allows the CPU, storage, and network components to rest between logins. This does not produce results that are consistent with the real-world scenarios that our customers run.)

For each of the three consecutive runs on single server tests, the same process was followed:

1. Time 0:00:00 Started ESXTOP logging on the following systems: the VDI host blades, DDCs, Profile Server(s), and SQL Server(s) used in the test run, and 3 Launcher VMs
2. Time 0:00:10 Started EMC logging on the controllers
3. Time 0:00:15 Started Perfmon logging on key infrastructure VMs
4. Time 0:05 Took test desktop Delivery Group(s) out of maintenance mode in XenDesktop 7.5 Studio
5. Time 0:06 First machines boot
6. Time 0:26 Test desktops or RDS servers booted
7. Time 0:28 Test desktops or RDS servers registered with XenDesktop 7.5 Studio
8. Time 1:28 Started Login VSI 3.7 test with test desktops utilizing Login VSI Launchers (25 sessions per launcher)
9. Time 1:58 All test sessions launched
10. Time 2:00 All test sessions active
11. Time 2:15 Login VSI test ends
12. Time 2:30 All test sessions logged off
13. Time 2:35 All logging terminated

Success Criteria

There were multiple metrics captured during each test run, but the success criteria for considering a single test run as pass or fail was based on the key metric, VSImax. Login VSImax evaluates the user response time during increasing user load and assesses the successful start-to-finish execution of all the initiated virtual desktop sessions.

Login VSImax

VSImax represents the maximum number of users the environment can handle before serious performance degradation occurs. VSImax is calculated based on the response times of individual users as indicated during the workload execution. The user response time has a threshold of 4000ms, and all user response times are expected to be less than 4000ms in order to assume that user interaction with the virtual desktop is at a functional level. VSImax is reached when the response time reaches or exceeds 4000ms for 6 consecutive occurrences. If VSImax is reached, that indicates the point at which the user experience has significantly degraded. The response time is generally an indicator of the host CPU resources, but this specific method of analyzing the user experience provides an objective method of comparison that can be aligned to host CPU performance.
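The saturation rule just described (response times at or above the 4000ms threshold for 6 consecutive occurrences) can be sketched as a simple scan over the measured samples. This is a minimal illustration of the criterion only, not Login VSI's actual analyzer code; the function name and sample data are invented for the example.

```python
def vsimax_reached(response_times_ms, threshold=4000, consecutive=6):
    """Return the index where a run of `consecutive` samples at or above
    `threshold` begins, or None if the criterion is never satisfied."""
    run = 0
    for i, rt in enumerate(response_times_ms):
        run = run + 1 if rt >= threshold else 0
        if run == consecutive:
            return i - consecutive + 1  # start of the saturated run
    return None

# Five slow samples, one fast sample, then six slow samples: the single
# fast sample resets the count, so VSImax is hit in the second run.
samples = [4200] * 5 + [1500] + [4100] * 6
assert vsimax_reached(samples) == 6
```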

Note: In the prior version of Login VSI, the threshold for response time was 2000ms. The workloads and the analysis have been upgraded in Login VSI 3 to make the testing more aligned to real-world use. In the medium workload in Login VSI 3.0, a CPU intensive 480p flash movie is incorporated in each test loop. In general, the redesigned workload would result in an approximate 20% decrease in the number of users passing the test versus Login VSI 2.0 on the same server and storage hardware.


Calculating VSImax

Typically, the desktop workload is scripted in a 12-14 minute loop when a simulated Login VSI user is logged on. After the loop is finished, it restarts automatically. Within each loop, the response times of seven specific operations are measured at a regular interval: six times within each loop. The response times of these seven operations are used to establish VSImax.

The seven operations from which the response times are measured are:

Copy new document from the document pool in the home drive
– This operation will refresh a new document to be used for measuring the response time. This activity is mostly a file-system operation.

Starting Microsoft Word with a document
– This operation will measure the responsiveness of the operating system and the file system. Microsoft Word is started and loaded into memory, and the new document is automatically loaded into Microsoft Word. When the disk I/O is extensive or even saturated, this will impact the file open dialogue considerably.

Starting the “File Open” dialogue
– This operation is handled for a small part by Word and a large part by the operating system. The file open dialogue uses generic subsystems and interface components of the OS. The OS provides the contents of this dialogue.

Starting “Notepad”
– This operation is handled by the OS (loading and initiating notepad.exe) and by Notepad.exe itself through execution. This operation seems instant from an end-user’s point of view.

Starting the “Print” dialogue
– This operation is handled for a large part by the OS subsystems, as the print dialogue is provided by the OS. This dialogue loads the print subsystem and the drivers of the selected printer. As a result, this dialogue is also dependent on disk performance.

Starting the “Search and Replace” dialogue
– This operation is handled within the application completely; the presentation of the dialogue is almost instant. Serious bottlenecks at the application level will impact the speed of this dialogue.

Compress the document into a zip file with the 7-zip command line
– This operation is handled by the command-line version of 7-zip. The compression will very briefly spike CPU and disk I/O.

These measured operations with Login VSI hit considerably different subsystems, such as CPU (user and kernel), memory, disk, the OS in general, the application itself, print, GDI, etc. These operations are specifically short by nature. When such operations are consistently long, the system is saturated because of excessive queuing on some kind of resource, and the average response times escalate. This effect is clearly visible to end-users. When such operations consistently consume multiple seconds, the user will regard the system as slow and unresponsive.

With Login VSI 3.0 and later it is possible to choose between ‘VSImax Classic’ and ‘VSImax Dynamic’ results analysis. For these tests, we utilized VSImax Dynamic analysis.

VSImax Dynamic

VSImax Dynamic is calculated when the response times are consistently above a certain threshold. However, this threshold is dynamically calculated based on the baseline response time of the test.


Seven individual measurements are weighted to better support this approach:

Copy new doc from the document pool in the home drive: 100%

Starting Microsoft Word with a document: 33.3%

Starting the “File Open” dialogue: 100%

Starting “Notepad”: 300%

Starting the “Print” dialogue: 200%

Starting the “Search and Replace” dialogue: 400%

Compress the document into a zip file with 7-zip command line: 200%

A sample of the VSImax Dynamic response time calculation is displayed below:

The average VSImax response time is then calculated based on the number of active Login VSI users logged on to the system. For this, the average VSImax response times need to be consistently higher than a dynamically calculated threshold.

To determine this dynamic threshold, first the average baseline response time is calculated by averaging the baseline response times of the first 15 Login VSI users on the system.

The formula for the dynamic threshold is: Avg. Baseline Response Time x 125% + 3000. As a result, when the baseline response time is 1800ms, the VSImax threshold will be 1800 x 125% + 3000 = 5250ms.

Especially when application virtualization is used, the baseline response time can vary wildly per vendor and streaming strategy. Therefore, it is recommended to use VSImax Dynamic when comparisons are made with application virtualization or anti-virus agents. The resulting VSImax Dynamic scores remain aligned with saturation at the CPU, memory, or disk level, even when the baseline response times are relatively high.
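The baseline-and-threshold arithmetic described above can be sketched as follows. This is a minimal illustration under the formula and weights stated in this section; the function and dictionary names are invented for the example and are not Login VSI APIs.

```python
def dynamic_threshold(baseline_times_ms):
    """VSImax Dynamic threshold: average baseline response time of the
    first 15 Login VSI users x 125% + 3000 ms."""
    first_15 = baseline_times_ms[:15]
    avg_baseline = sum(first_15) / len(first_15)
    return avg_baseline * 1.25 + 3000

# Per-operation weights (as fractions) applied to raw measurements
# before the weighted response times are averaged.
WEIGHTS = {
    "copy_new_doc": 1.0,            # 100%
    "start_word": 0.333,            # 33.3%
    "file_open_dialog": 1.0,        # 100%
    "start_notepad": 3.0,           # 300%
    "print_dialog": 2.0,            # 200%
    "search_replace_dialog": 4.0,   # 400%
    "zip_document": 2.0,            # 200%
}

def weighted_response(measurements_ms):
    """Apply the per-operation weights to raw measurements in ms."""
    return {op: measurements_ms[op] * w for op, w in WEIGHTS.items()}

# The document's worked example: an 1800 ms baseline yields a 5250 ms threshold.
assert dynamic_threshold([1800] * 15) == 5250.0
```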

Determining VSImax

The Login VSI analyzer will automatically identify the “VSImax”. In the example below, the VSImax is 98. The analyzer will automatically determine “stuck sessions” and correct the final VSImax score.

Vertical axis: Response Time in milliseconds

Horizontal axis: Total Active Sessions


Figure 30: Sample Login VSI Analyzer Graphic Output

Red line: Maximum Response (worst response time of an individual measurement within a single session)

Orange line: Average Response Time for each level of active sessions

Blue line: the VSImax average

Green line: Minimum Response (best response time of an individual measurement within a single session)

In our tests, the total number of users in the test run had to log in, become active, run at least one test loop, and log out automatically without reaching the VSImax to be considered a success.

Note: We discovered a technical issue with the VSIMax dynamic calculation in our testing on Cisco UCS B200 M3 blades where the VSIMax Dynamic was not reached during extreme conditions. Working with Login VSI Inc, we devised a methodology to validate the testing without reaching VSIMax Dynamic until such time as a new calculation is available.

Our Login VSI “pass” criteria, accepted by Login VSI Inc for this testing follows:

a. Cisco will run tests at a session count level that effectively utilizes the blade capacity, measured by CPU, memory, storage, and network utilization.

b. We will use Login VSI to launch version 3.7 medium workloads, including Flash.

c. The number of launched sessions must equal the number of active sessions within two minutes of the last session launched in a test.

d. XenDesktop 7.5 Studio will be monitored throughout the steady state to ensure that:

– All running sessions report In Use throughout the steady state

– No sessions move to Agent Unreachable or Disconnected state at any time during the steady state

e. Within 20 minutes of the end of the test, all sessions on all launchers must have logged out automatically and the Login VSI agent must have shut down.

f. We will publish our CVD with our recommendation following the process above and will note that we did not reach VSImax Dynamic in our testing due to a technical issue with the analyzer formula that calculates VSImax.
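Criterion (c) lends itself to a mechanical check against launcher timing data. A minimal sketch, assuming simple lists of launch and activation times in seconds since test start (this is an illustrative record shape, not Login VSI's actual log format):

```python
# Check for pass criterion (c): every launched session must become
# active within two minutes of the last session launch.
# Event times are seconds since test start; the record shape is illustrative.

def criterion_c_met(launch_times, active_times):
    if len(launch_times) != len(active_times):
        return False  # some launched sessions never became active
    deadline = max(launch_times) + 120  # two minutes after last launch
    return all(t <= deadline for t in active_times)

print(criterion_c_met([0, 30, 60], [5, 40, 150]))  # True: all active by t=180
print(criterion_c_met([0, 30, 60], [5, 40, 200]))  # False: one session too late
```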


Citrix XenDesktop 7.5 Hosted Virtual Desktop and RDS Hosted Shared Desktop Mixed Workload on Cisco UCS B200 M3 Blades, EMC VNX5400 Storage and VMware ESXi 5.5 Test Results

The purpose of this testing is to provide the data needed to validate the Citrix XenDesktop 7.5 Hosted Virtual Desktop and Citrix XenDesktop 7.5 RDS Hosted Shared Desktop models with Citrix Provisioning Services 7.1, using ESXi 5.5 and vCenter 5.5 to virtualize Microsoft Windows 7 SP1 desktops and Microsoft Windows Server 2012 sessions on Cisco UCS B200 M3 Blade Servers with an EMC VNX5400 storage system.

The information contained in this section provides data points that a customer may reference in designing their own

implementations. These validation results are an example of what is possible under the specific environment

conditions outlined here, and do not represent the full characterization of XenDesktop with VMware vSphere.

Two test sequences, each containing three consecutive test runs generating the same result, were performed to

establish single blade performance and multi-blade, linear scalability.

One series of stress tests on a single blade server was conducted to establish the official Login VSI Max Score.

To reach the Login VSI Max with XenDesktop 7.5 Hosted Virtual Desktops, we ran 202 Medium-with-Flash workload Windows 7 SP1 sessions on a single blade. A consistent Login VSI score was achieved on three consecutive runs and is shown below.

Figure 31: Login VSI Max Reached: 187 Users XenDesktop 7.5 Hosted VDI with PVS write-cache on VNX5400

To reach the Login VSI Max with XenDesktop 7.5 RDS Hosted Shared Desktops, we ran 256 Medium-with-Flash workload Windows Server 2012 desktop sessions on a single blade. A consistent Login VSI score was achieved on three consecutive runs and is shown below.


Figure 32: Login VSI Max Reached: 235 Users XenDesktop 7.5 RDS Hosted Shared Desktops

Single-Server Recommended Maximum Workload

For both the XenDesktop 7.5 Hosted Virtual Desktop and RDS Hosted Shared Desktop use cases, a recommended maximum workload was determined based on both Login VSI Medium-with-Flash end-user experience measures and blade server operating parameters.

This recommended maximum workload approach allows you to determine the server N+1 fault-tolerance load the blade can successfully support in the event of a server outage for maintenance or upgrade.

Our recommendation is that the Login VSI Average Response and VSI Index Average should not exceed the baseline plus 2000 milliseconds, to ensure that the end-user experience is outstanding. Additionally, during steady state, processor utilization should average no more than 90-95%. (Memory should never be oversubscribed for desktop virtualization workloads.)
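The two acceptance conditions above can be combined into a single check. A sketch with illustrative numbers; the 95% figure is the upper end of the stated steady-state CPU range:

```python
# Recommended-maximum-workload acceptance check described above:
# VSI Average Response must stay within Baseline + 2000 ms, and
# steady-state CPU must average no more than 95%.

def workload_acceptable(baseline_ms, vsi_avg_ms, cpu_samples_pct):
    latency_ok = vsi_avg_ms <= baseline_ms + 2000
    cpu_ok = sum(cpu_samples_pct) / len(cpu_samples_pct) <= 95.0
    return latency_ok and cpu_ok

print(workload_acceptable(1800, 3500, [88, 92, 94, 90]))  # True
print(workload_acceptable(1800, 4100, [88, 92, 94, 90]))  # False: over budget
```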

XenDesktop 7.5 Hosted Virtual Desktop Single Server Maximum Recommended Workload

The maximum recommended workload for a B200 M3 blade server with dual E5-2680 v2 processors and 384GB of RAM is 160 Windows 7 32-bit virtual machines with 1 vCPU and 1.5GB RAM. Login VSI and blade performance data follow.


Performance data for the server running the workload follows:
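The charts that follow plot esxtop-style counters, captured as batch-mode CSV (a timestamp in the first column and one quoted column per counter). A minimal sketch of extracting one such series; the counter name matches the chart legends in this section, but the parsing details assume standard perfmon-style CSV output:

```python
import csv

# Minimal sketch: extract one counter series from an esxtop batch-mode CSV.
# The file has a timestamp first column and one quoted column per counter.
COUNTER = r"\\vsphere2.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Core Util Time"

def counter_series(path, counter=COUNTER):
    with open(path, newline="") as f:
        rows = csv.reader(f)
        header = next(rows)
        idx = header.index(counter)  # find the column for this counter
        return [(r[0], float(r[idx])) for r in rows if r[idx]]
```

A series returned this way is a list of (timestamp, value) pairs, ready to chart or average for steady-state analysis.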

Figure 33: XenDesktopHVD-01 Hosted Virtual Desktop Server CPU Utilization

[Chart data omitted. Counter: \\vsphere2.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Core Util Time. Y axis: 0-100 percent; X axis: time of day.]


Figure 34: XenDesktopHVD-01 Hosted Virtual Desktop Server Memory Utilization

Figure 35: XenDesktopHVD-01 Hosted Virtual Desktop Server Network Utilization

[Chart data omitted. Counter: \\vsphere2.cvspex.rtp.lab.emc.com\Memory\NonKernel MBytes. Y axis: 0-300000 MB; X axis: time of day.]

[Chart data omitted. Y axis: 0-1400 Mbits/sec; X axis: time of day. Counters:
\\vsphere2.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108876:vmnic0)\MBitsReceived/sec
\\vsphere2.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108876:vmnic0)\MBitsTransmitted/sec
\\vsphere2.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108877:vmnic1)\MBitsReceived/sec
\\vsphere2.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108877:vmnic1)\MBitsTransmitted/sec]


XenDesktop 7.5 RDS Hosted Shared Desktop Single Server Maximum Recommended Workload

The maximum recommended workload for a B200 M3 blade server with dual E5-2680 v2 processors and 256GB of RAM is 200 Server 2012 R2 Hosted Shared Desktop sessions. Each blade server ran 8 Server 2012 R2 virtual machines, each configured with 5 vCPUs and 24GB RAM.
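The consolidation numbers above can be sanity-checked with simple arithmetic (the E5-2680 v2 is a 10-core part, so a dual-socket blade has 20 physical cores):

```python
# Sanity check of the RDS blade sizing above: 8 Server 2012 R2 VMs,
# each with 5 vCPUs and 24 GB RAM, on a dual 10-core E5-2680 v2 blade
# with 256 GB of RAM.
vms, vcpus_per_vm, ram_per_vm_gb = 8, 5, 24
physical_cores, blade_ram_gb = 2 * 10, 256

total_vcpus = vms * vcpus_per_vm            # 40 vCPUs
vcpu_ratio = total_vcpus / physical_cores   # 2.0 vCPUs per physical core
ram_used_gb = vms * ram_per_vm_gb           # 192 GB: RAM is not oversubscribed
print(total_vcpus, vcpu_ratio, ram_used_gb)
```

At 2 vCPUs per physical core and 192 GB of the 256 GB installed, the blade honors the no-memory-oversubscription rule stated earlier.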

Figure 36: Login VSI Results for 200 XenDesktop 7 RDS Hosted Shared Desktop Sessions.

Performance data for the server running the workload follows:


Figure 37: XENDESKTOPRDS-01 Hosted Shared Desktop Server CPU Utilization

Figure 38: XENDESKTOPRDS-01 Hosted Shared Desktop Server Memory Utilization

[Chart data omitted. Counter: \\vsphere4.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Core Util Time. Y axis: 0-100 percent; X axis: time of day.]

[Chart data omitted. Counter: \\vsphere4.cvspex.rtp.lab.emc.com\Memory\NonKernel MBytes. Y axis: 0-250000 MB; X axis: time of day.]


Figure 39: XENDESKTOPRDS-01 Hosted Shared Desktop Server Network Utilization

Full Scale Mixed Workload XenDesktop 7.5 Hosted Virtual and RDS Hosted Shared Desktops

The combined mixed workload for the study was 1000 seats. To achieve the target, we launched the sessions against both clusters concurrently. The Cisco Test Protocol for XenDesktop, described in Section 8 above, specifies that all sessions must be launched within 30 minutes and that all launched sessions must become active within 32 minutes.

The configured system efficiently and effectively delivered the following results. (Note: Appendix B contains performance charts for all eight blades in one of the three scale test runs.)

[Chart data omitted. Y axis: 0-600 Mbits/sec; X axis: time of day. Counters:
\\vsphere4.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108881:vmnic0)\MBitsReceived/sec
\\vsphere4.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108881:vmnic0)\MBitsTransmitted/sec
\\vsphere4.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108882:vmnic1)\MBitsReceived/sec
\\vsphere4.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108882:vmnic1)\MBitsTransmitted/sec]


Figure 40: 1000 User Mixed Workload XenDesktop 7.5 Login VSI End User Experience Graph

Figure 41: Representative UCS B200 M3 XenDesktop 7.5 HVD Blade CPU Utilization

[Chart data omitted. Counter: \\vsphere2.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Core Util Time. Y axis: 0-90 percent; X axis: time of day.]


Figure 42: Representative UCS B200 M3 XenDesktop 7.5 HVD Blade Memory Utilization

Figure 43: Representative UCS B200 M3 XenDesktop 7.5 HVD Blade Network Utilization

[Chart data omitted. Counter: \\vsphere2.cvspex.rtp.lab.emc.com\Memory\NonKernel MBytes. Y axis: 0-180000 MB; X axis: time of day.]

[Chart data omitted. Y axis: 0-1800 Mbits/sec; X axis: time of day. Counters:
\\vsphere2.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108876:vmnic0)\MBitsReceived/sec
\\vsphere2.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108876:vmnic0)\MBitsTransmitted/sec
\\vsphere2.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108877:vmnic1)\MBitsReceived/sec
\\vsphere2.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108877:vmnic1)\MBitsTransmitted/sec]


Figure 44: Representative UCS B200 M3 XenDesktop 7.5 RDS HSD Blade CPU Utilization

Figure 45: Representative UCS B200 M3 XenDesktop 7.5 RDS HSD Blade Memory Utilization

[Chart data omitted. Counter: \\vsphere4.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Core Util Time. Y axis: 0-100 percent; X axis: time of day.]

[Chart data omitted. Counter: \\vsphere4.cvspex.rtp.lab.emc.com\Memory\NonKernel MBytes. Y axis: 0-200000 MB; X axis: time of day.]


Figure 46: Representative UCS B200 M3 XenDesktop 7.5 RDS HSD Blade Network Utilization

Key EMC VNX5400 Performance Metrics During Scale Testing

Key performance metrics were captured on the EMC storage during the full scale testing.

Table 10. Test cases

Workload      Test Case
Boot          Boot all 300 HVD and 32 RDS virtual machines at the same time.
Login         One user logs in and begins work every 1.8 seconds until the
              maximum of 1000 users is reached, at which point "steady state"
              is assumed.
Steady state  All users perform various tasks using Microsoft Office, web
              browsing, PDF printing, Flash video playback, and the freeware
              mind mapper application.
Logoff        Log off all 1000 users at the same time.
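The login cadence in Table 10 fixes the length of the login window directly, which matches the 30-minute login time reported in the findings:

```python
# Login-storm window implied by the Table 10 cadence:
# one user logs in every 1.8 seconds until 1000 users are active.
users, interval_s = 1000, 1.8
window_minutes = users * interval_s / 60
print(window_minutes)  # 30.0
```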

Storage used for the tests included:

VNX5400 unified storage

Five shelves contain a mix of 100GB SSD, 600GB SAS 15K RPM, and 2TB NL-SAS 7.2K RPM drives

VNX File 8.1.2-51 and Block 05.33.000.5.051.

10GbE storage network for NFS, CIFS, iSCSI, and TFTP

[Chart data omitted. Y axis: 0-600 Mbits/sec; X axis: time of day. Counters:
\\vsphere4.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108881:vmnic0)\MBitsReceived/sec
\\vsphere4.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108881:vmnic0)\MBitsTransmitted/sec
\\vsphere4.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108882:vmnic1)\MBitsReceived/sec
\\vsphere4.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108882:vmnic1)\MBitsTransmitted/sec]


Performance Results

Findings:

– EMC FAST Cache decreases IOPS during the boot and login phases

– The storage can easily handle the 1000-user virtual desktop workload, with average read latency of less than 3 ms and write latency of less than 1 ms

– The Citrix UPM exclusion rule is essential to lowering user login IOPS and login time

– Boot time is seven minutes and login time for 1000 users is consistently 30 minutes

Table 11. 1000-user CIFS workload

        read ops  read latency (us)  write ops  write latency (us)
Boot    75        868                0          112
Login   565       540                650        263
Steady  841       606                1077       364
Logoff  275       1876               295        443

Table 12. 1000 desktop users' average IOPS during boot, login, steady state, and logoff

        read ops/s  read latency (us)  write ops/s  write latency (us)
Boot    24          2395               1049         428
Login   45          961                4728         791
Steady  23          1326               4108         824
Logoff  103         1279               3448         780

Table 13. Average CPU on the storage processors during boot, login, steady state, and logoff

        SP A  SP B
Boot    6%    5%
Login   20%   19%
Steady  22%   21%
Logoff  17%   16%

Citrix PVS Workload Characteristics

The vast majority of the workload generated by PVS goes to the write cache storage. Comparatively, read operations constitute very little of the total I/O except at the beginning of the VM boot process. After the initial boot, reads to the OS vDisk are mostly served from the PVS server's cache.

The write portion of the workload includes all the OS- and application-level changes that the VMs incur. The entire PVS solution is approximately 90% write from the storage's perspective in all cases.


With non-persistent pooled desktops, the PVS write cache comprises the rest of the I/O. The storage that contains the read-only OS vDisk incurred almost no I/O activity after initial boot and averaged zero IOPS per desktop to the storage (due to the PVS server cache). The write cache showed a peak average of 10 IOPS per desktop during the login storm, with the steady state showing 15-20% fewer I/Os in all configurations. The write cache workload op size averaged 8k, with 90% of the workload being writes.
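Those per-desktop figures can be turned into a rough aggregate estimate of write-cache load at the login-storm peak. A sketch; the 300-desktop count is the pooled HVD share of this design's 1000-seat mix, and the other constants come from the measurements above:

```python
# Rough login-storm write-cache load from the PVS characteristics above:
# ~10 IOPS per desktop at peak, ~8 KB average op size, ~90% writes.
desktops = 300                 # pooled HVDs in this 1000-seat design
peak_iops_per_desktop = 10
op_kb, write_fraction = 8, 0.90

total_iops = desktops * peak_iops_per_desktop    # aggregate IOPS at peak
write_iops = total_iops * write_fraction         # write component of that load
throughput_mb_s = total_iops * op_kb / 1024      # aggregate MB/s
print(total_iops, write_iops, round(throughput_mb_s, 1))
```

At roughly 3000 IOPS (about 2700 of them writes) and under 25 MB/s, the login-storm peak is well within what the tested VNX5400 configuration absorbed.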

The addition of CIFS profile management offloaded some workload from the write cache. Login VSI tests showed that three IOPS per desktop were removed from the write cache and served from CIFS. The additional four IOPS per desktop seen on the CIFS side were composed of metadata operations (open/close, getattr, lock).

Sizing for CIFS home directories should be done as a separate workload from the virtual desktop workload. The storage resource needs for CIFS home directories are highly variable and depend on the needs of the users and applications in the environment.

Key Infrastructure Server Performance Metrics During Scale Testing

It is important to verify that key infrastructure servers perform optimally during the scale test run. The following performance parameters were collected and charted; they validate that the designed infrastructure supports the mixed workload.

Figure 47: Active Directory Domain Controller CPU Utilization

[Chart data omitted. Counter: \\172.16.64.1\Processor(_Total)\% Processor Time. Y axis: 0-100 percent; X axis: time of day.]


Figure 48: Active Directory Domain Controller Memory Utilization

Figure 49: Active Directory Domain Controller Network Utilization

[Chart data omitted. Counter: \\172.16.64.1\Memory\Available MBytes. Y axis: 3135-3175 MB; X axis: time of day.]

[Chart data omitted. Y axis: 0-700000 bytes/sec; X axis: time of day. Counters:
\\172.16.64.1\Network Interface(vmxnet3 Ethernet Adapter)\Bytes Received/sec
\\172.16.64.1\Network Interface(vmxnet3 Ethernet Adapter)\Bytes Sent/sec]


Figure 50: Active Directory Domain Controller Disk Queue Lengths

Figure 51: Active Directory Domain Controller Disk IO Operations

[Chart data omitted. Y axis: 0-1.2 queue length; X axis: time of day. Counters:
\\172.16.64.1\PhysicalDisk(0 C:)\Current Disk Queue Length
\\172.16.64.1\PhysicalDisk(0 C:)\Avg. Disk Queue Length
\\172.16.64.1\PhysicalDisk(0 C:)\Avg. Disk Read Queue Length
\\172.16.64.1\PhysicalDisk(0 C:)\Avg. Disk Write Queue Length]

[Chart data omitted. Y axis: 0-140 operations/sec; X axis: sample number. Counters:
\\172.16.64.1\PhysicalDisk(0 C:)\Disk Transfers/sec
\\172.16.64.1\PhysicalDisk(0 C:)\Disk Reads/sec
\\172.16.64.1\PhysicalDisk(0 C:)\Disk Writes/sec]


Figure 52: vCenter Server CPU Utilization

Figure 53: vCenter Server Memory Utilization

[Chart data omitted. Counter: \\172.16.64.4\Processor(_Total)\% Processor Time. Y axis: 0-100 percent; X axis: time of day.]

[Chart data omitted. Counter: \\172.16.64.4\Memory\Available MBytes. Y axis: 2400-3000 MB; X axis: time of day.]


Figure 54: vCenter Server Network Utilization

Figure 55: vCenter Server Disk Queue Lengths

[Chart data omitted. Y axis: 0-2000000 bytes/sec; X axis: time of day. Counters:
\\172.16.64.4\Network Interface(vmxnet3 Ethernet Adapter)\Bytes Received/sec
\\172.16.64.4\Network Interface(vmxnet3 Ethernet Adapter)\Bytes Sent/sec]

[Chart data omitted. Y axis: 0-2.5 queue length; X axis: time of day. Counters:
\\172.16.64.4\PhysicalDisk(0 C:)\Current Disk Queue Length
\\172.16.64.4\PhysicalDisk(0 C:)\Avg. Disk Queue Length
\\172.16.64.4\PhysicalDisk(0 C:)\Avg. Disk Read Queue Length
\\172.16.64.4\PhysicalDisk(0 C:)\Avg. Disk Write Queue Length]


Figure 56: vCenter Server Disk IO Operations

Figure 57: XDSQL1 SQL Server CPU Utilization

[Chart data omitted. Y axis: 0-1400 operations/sec; X axis: time of day. Counters:
\\172.16.64.4\PhysicalDisk(0 C:)\Disk Transfers/sec
\\172.16.64.4\PhysicalDisk(0 C:)\Disk Reads/sec
\\172.16.64.4\PhysicalDisk(0 C:)\Disk Writes/sec]

[Chart data omitted. Counter: \\172.16.64.3\Processor(_Total)\% Processor Time. Y axis: 0-100 percent; X axis: time of day.]


Figure 58: XDSQL1 SQL Server Memory Utilization

Figure 59: XDSQL1 SQL Server Network Utilization

[Chart data omitted. Counter: \\172.16.64.3\Memory\Available MBytes. Y axis: 3550-3640 MB; X axis: time of day.]

[Chart data omitted. Y axis: 0-1200000 bytes/sec; X axis: time of day. Counters:
\\172.16.64.3\Network Interface(vmxnet3 Ethernet Adapter)\Bytes Received/sec
\\172.16.64.3\Network Interface(vmxnet3 Ethernet Adapter)\Bytes Sent/sec]


Figure 60: XDSQL1 SQL Server Disk Queue Lengths

Figure 61: XDSQL1 SQL Server Disk IO Operations

[Figure 60 chart data omitted — performance counters: \\172.16.64.3\PhysicalDisk(0 C:) and (1 E:) Current/Avg. Disk Queue Length, Avg. Disk Read/Write Queue Length]

[Figure 61 chart data omitted — performance counters: \\172.16.64.3\PhysicalDisk(0 C:) and (1 E:) Disk Transfers/sec, Disk Reads/sec, Disk Writes/sec]


Figure 62: XENPVS1 Provisioning Server CPU Utilization

Figure 63: XENPVS1 Provisioning Server Memory Utilization

[Figure 62 chart data omitted — performance counter: \\172.16.64.10\Processor(_Total)\% Processor Time]

[Figure 63 chart data omitted — performance counter: \\172.16.64.10\Memory\Available MBytes]


Figure 64: XENPVS1 Provisioning Server Network Utilization

Figure 65: XENPVS1 Provisioning Server Disk Queue Lengths

[Figure 64 chart data omitted — performance counters: \\172.16.64.10\Network Interface(vmxnet3 Ethernet Adapter)\Bytes Received/sec, Bytes Sent/sec]

[Figure 65 chart data omitted — performance counters: \\172.16.64.10\PhysicalDisk(0 C:) Current/Avg. Disk Queue Length, Avg. Disk Read/Write Queue Length]


Figure 66: XENPVS1 Provisioning Server Disk IO Operations

Figure 67: XENDESKTOP1 Broker Server CPU Utilization

[Figure 66 chart data omitted — performance counters: \\172.16.64.10\PhysicalDisk(0 C:)\Disk Transfers/sec, Disk Reads/sec, Disk Writes/sec]

[Figure 67 chart data omitted — performance counter: \\172.16.64.8\Processor(_Total)\% Processor Time]


Figure 68: XENDESKTOP1 Broker Server Memory Utilization

Figure 69: XENDESKTOP1 Broker Server Network Utilization

[Figure 68 chart data omitted — performance counter: \\172.16.64.8\Memory\Available MBytes]

[Figure 69 chart data omitted — performance counters: \\172.16.64.8\Network Interface(vmxnet3 Ethernet Adapter)\Bytes Received/sec, Bytes Sent/sec]


Figure 70: XENDESKTOP1 Broker Server Disk Queue Lengths

Figure 71: XENDESKTOP1 Broker Server Disk IO Operations

[Figure 70 chart data omitted — performance counters: \\172.16.64.8\PhysicalDisk(0 C:) Current/Avg. Disk Queue Length, Avg. Disk Read/Write Queue Length]

[Figure 71 chart data omitted — performance counters: \\172.16.64.8\PhysicalDisk(0 C:)\Disk Transfers/sec, Disk Reads/sec, Disk Writes/sec]


Scalability Considerations and Guidelines

There are many factors to consider when you begin to scale beyond the 1000-user, two-chassis, eight-server mixed-workload VDI/HSD host configuration that this reference architecture has successfully tested. In this section we give guidance for scaling beyond the 1000-user system.

Cisco UCS System Scalability

As our results indicate, we have proven linear scalability in the Cisco UCS Reference Architecture as tested. Cisco UCS 2.2(1d) management software supports up to 20 chassis within a single Cisco UCS domain on our second-generation Cisco UCS 6248 and 6296 Fabric Interconnect models, so a single UCS domain can grow to 160 blades.

With Cisco UCS 2.2(1d) management software, released in March 2014, each UCS 2.2(1c) management domain can be managed by Cisco UCS Central, our manager of managers, vastly increasing the reach of the UCS system.

As scale grows, the value of the combined UCS fabric, Nexus physical switches, and Nexus virtual switches increases dramatically in defining the quality of service required to deliver an excellent end-user experience 100% of the time.

To accommodate Cisco Nexus 5500 upstream connectivity as described in the LAN and SAN Configuration section, two Ethernet uplinks must be configured on each Cisco UCS Fabric Interconnect. Based on the number of uplinks from each chassis, we can calculate the number of desktops that can be hosted in a single UCS domain. Assuming eight links per chassis, four to each 6248, scaling beyond 10 chassis would require a pair of Cisco UCS 6296 Fabric Interconnects.

A 25,000 virtual desktop building block, managed by a single UCS domain with its supporting infrastructure services, can be built out from the reference architecture described in this study using eight links per chassis, with 152 Cisco UCS B200 M3 servers and 8 infrastructure blades configured per the specifications in this document, in 20 chassis.
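The domain-level arithmetic above can be sketched as a quick sizing check. The blade and chassis counts come straight from this study; the implied per-blade density is derived here only for illustration and is not a published Cisco figure:

```python
# Sizing sketch for a single Cisco UCS domain, using figures from this study.
BLADES_PER_CHASSIS = 8        # Cisco UCS 5108 chassis holds 8 half-width blades
MAX_CHASSIS_PER_DOMAIN = 20   # UCS Manager 2.2(1d) limit per domain
INFRASTRUCTURE_BLADES = 8     # infrastructure blades reserved per this document

total_blades = BLADES_PER_CHASSIS * MAX_CHASSIS_PER_DOMAIN        # 160 blades
workload_blades = total_blades - INFRASTRUCTURE_BLADES            # 152 workload blades

# The 25,000-desktop building block quoted above implies an average
# mixed-workload (70% HSD / 30% VDI) density across the workload blades:
implied_density = 25000 / workload_blades

print(f"{total_blades} blades total, {workload_blades} for workload")
print(f"implied average density: {implied_density:.1f} desktops per blade")
```

Running this confirms the 160-blade domain limit (20 chassis x 8 blades) and shows the 25,000-seat block implies roughly 164 desktops per workload blade on average.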

Of course, the back-end storage has to be scaled accordingly, based on the IOPS considerations described in the EMC scaling section. Please refer to the EMC section that follows this one for scalability guidelines.

Scalability of Citrix XenDesktop 7.5 Configuration

XenDesktop environments can scale to large numbers. When implementing Citrix XenDesktop, consider the following when scaling the number of hosted shared and hosted virtual desktops:

- Types of storage in your environment
- Types of desktops that will be deployed
- Data protection requirements
- For Citrix Provisioning Server pooled desktops, the write cache sizing and placement

These and other aspects of scalability are described in greater detail in the "XenDesktop - Modular Reference Architecture" document and should be a part of any XenDesktop design.

When designing and deploying this CVD environment, best practices were followed, including the following:

- Citrix recommends using an N+1 schema for virtualization host servers to accommodate resiliency. In all Reference Architectures (such as this CVD), this recommendation is applied to all host servers.
- All Provisioning Server network adapters are configured with static IP addresses for management.
- We used the XenDesktop Setup Wizard in PVS. The wizard does an excellent job of creating the desktops automatically, and it is possible to run multiple instances of the wizard, provided the deployed desktops are placed in different catalogs and have different naming conventions. To use the PVS XenDesktop Setup Wizard, at a minimum you need to install the Provisioning Server and the XenDesktop Controller, configure hosts, and create VM templates on all datastores where desktops will be deployed.


EMC VNX5400 Storage Guidelines for Mixed Desktops Virtualization Workload

Sizing a VNX storage system to meet virtual desktop IOPS requirements is a complicated process. When an I/O reaches the VNX storage, it is served by several components, such as the Data Mover (NFS), back-end dynamic random access memory (DRAM) cache, FAST Cache, and disks. To reduce the complexity, EMC recommends using a building-block approach to scale to thousands of virtual desktops.
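As a rough illustration of why sizing is driven by back-end IOPS, the sketch below estimates the raw spindle count for a desktop building block. The per-desktop IOPS, read/write split, per-disk IOPS, and RAID penalty are illustrative assumptions, not EMC-published VNX5400 values, and the calculation deliberately ignores the DRAM cache and FAST Cache, which absorb much of this load in practice — use EMC's sizing tool (linked below) for real designs:

```python
import math

# Illustrative building-block sizing sketch (assumed values, not EMC figures).
DESKTOPS = 1000
IOPS_PER_DESKTOP = 10        # assumed steady-state IOPS per desktop
WRITE_RATIO = 0.8            # VDI workloads are typically write-heavy
DISK_IOPS = 180              # assumed throughput of one 15k RPM SAS drive
RAID5_WRITE_PENALTY = 4      # each host write costs ~4 back-end I/Os in RAID 5

# Front-end IOPS arriving at the array, then back-end I/Os after RAID penalty.
front_end = DESKTOPS * IOPS_PER_DESKTOP
back_end = (front_end * (1 - WRITE_RATIO)
            + front_end * WRITE_RATIO * RAID5_WRITE_PENALTY)
disks = math.ceil(back_end / DISK_IOPS)

print(f"{front_end} front-end IOPS -> {back_end:.0f} back-end I/Os -> {disks} disks (ignoring cache)")
```

The point of the sketch is the amplification: 10,000 front-end IOPS becomes 34,000 back-end I/Os under these assumptions, which is why FAST Cache and the building-block approach matter.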

For more information on storage sizing guidelines to implement your end-user computing solution on VNX unified storage systems, refer to https://mainstayadvisor.com/Default.aspx?t=EMC&atid=224a9bcc-caa3-48f4-8ff8-33051513c410

References

This section provides links to additional information for each partner's solution component of this document.

Cisco Reference Documents

Cisco Unified Computing System Manager Home Page http://www.cisco.com/en/US/products/ps10281/index.html

Cisco UCS B200 M3 Blade Server Resources http://www.cisco.com/en/US/products/ps10280/index.html

Cisco UCS 6200 Series Fabric Interconnects

http://www.cisco.com/en/US/products/ps11544/index.html

Cisco Nexus 5500 Series Switches Resources

http://www.cisco.com/en/US/products/ps9670/index.html

Download Cisco UCS Manager and Blade Software Version 2.2(1d)

http://software.cisco.com/download/release.html?mdfid=283612660&softwareid=283655658&release=1.4(4l)&relind=AVAILABLE&rellifecycle=&reltype=latest

Download Cisco UCS Central Software Version 1.1(1b)

http://software.cisco.com/download/release.html?mdfid=284308174&softwareid=284308194&release=1.1(1b)&relind=AVAILABLE&rellifecycle=&reltype=latest&i=rs

Citrix Reference Documents

Citrix Product Downloads

http://www.citrix.com/downloads/xendesktop.html

Citrix Knowledge Center

http://support.citrix.com

Citrix XenDesktop 7.5 Documentation

http://support.citrix.com/proddocs/topic/xenapp-xendesktop/cds-xenapp-xendesktop-75-landing.html

Citrix Provisioning Services

http://support.citrix.com/proddocs/topic/provisioning-7/pvs-provisioning-7.html

Citrix User Profile Management

http://support.citrix.com/proddocs/topic/user-profile-manager-5-x/upm-wrapper-kib.html


EMC References

EMC VSPEX End User Computing Solution Overview

EMC VSPEX End-User Computing: Citrix XenDesktop 7 and VMware vSphere for up to 2,000 Virtual Desktops –

Design Guide

EMC VSPEX End-User Computing: Citrix XenDesktop 7 and VMware vSphere for up to 2,000 Virtual Desktops –

Implementation Guide

EMC VSPEX End-User Computing: Citrix XenDesktop 7 and Microsoft Hyper-V for up to 2,000 Virtual Desktops

– Design Guide

EMC VSPEX End-User Computing: Citrix XenDesktop 7 and Microsoft Hyper-V for up to 2,000 Virtual Desktops

– Implementation Guide

VMware References

VMware vCenter Server

http://www.vmware.com/products/vcenter-server/overview.html

VMware vSphere:

http://www.vmware.com/products/vsphere/

Login VSI http://loginvsi.com


Appendix A–Cisco Nexus 5548UP Configurations

Nexus5K-1

version 5.2(1)N1(7)

feature fcoe

hostname l4a12-nexus5k-1

feature npiv

feature telnet

cfs eth distribute

feature lacp

feature vpc

feature lldp

username admin password 5 $1$dXHe/D1d$bzimTuTVfEl3xH8MV63R7/ role network-admin

no password strength-check

banner motd #Nexus 5000 Switch

#

ip domain-lookup

policy-map type network-qos pm-nq-global

class type network-qos class-default

mtu 9216

multicast-optimize

set cos 5

system qos

service-policy type queuing input fcoe-default-in-policy

service-policy type queuing output fcoe-default-out-policy

service-policy type network-qos pm-nq-global

policy-map type control-plane copp-system-policy-customized

class copp-system-class-default

police cir 2048 kbps bc 6400000 bytes

slot 1

port 29-32 type fc

snmp-server user admin network-admin auth md5 0xcf27bcafe85f702841ec8eb8b1685651 priv 0xcf27bcafe85f702841ec8eb8b1685651 localizedkey

vrf context management

ip route 0.0.0.0/0 10.6.116.1

vlan 1

vlan 272

name cisco-vspex

vlan 273

name solutions-vdi-1

vlan 274

name solutions-vdi-2

vlan 275

name solutions-vdi-3


vlan 276

name solutions-vdi-4

vlan 277

name solutions-vdi-5

vlan 278

name Solutions-vdi-6

vlan 279

name solutions-vdi-7

vlan 516

name solutions-10.6.116

vlan 517

name solutions-10.6.117

vpc domain 101

peer-keepalive destination 10.6.116.40

peer-gateway

port-profile default max-ports 512

interface port-channel1

description VPC-Peerlink

switchport mode trunk

spanning-tree port type network

speed 10000

vpc peer-link

interface port-channel13

description to FI-5A

switchport mode trunk

vpc 13

interface port-channel14

description to FI-5B

switchport mode trunk

vpc 14

interface port-channel18

description to FI-6A

switchport mode trunk

vpc 18

interface port-channel19

description to FI-6B

switchport mode trunk

vpc 19

interface port-channel25

description to DM2-0

switchport mode trunk

untagged cos 5

switchport trunk allowed vlan 272-276,514-527

vpc 25

interface port-channel26

description to DM3-0


switchport mode trunk

untagged cos 5

switchport trunk allowed vlan 272-276,514-527

vpc 26

interface port-channel28

description uplink to nex1-rack700h

switchport mode trunk

vpc 28

interface Ethernet1/1

description uplink to rtpsol-ucs5-A14

switchport mode trunk

channel-group 13 mode active

interface Ethernet1/2

description uplink to rtpsol-ucs5-B14

switchport mode trunk

channel-group 14 mode active

interface Ethernet1/3

description uplink to rtpsol-ucs6-A14

switchport mode trunk

channel-group 18 mode active

interface Ethernet1/4

description uplink to rtpsol-ucs6-B14

switchport mode trunk

channel-group 19 mode active

interface Ethernet1/5

description to rtpsol44-dm2-0

switchport mode trunk

switchport trunk allowed vlan 272-276,514-527

spanning-tree port type edge trunk

channel-group 25 mode active

interface Ethernet1/6

description to rtpsol44-dm3-0

switchport mode trunk

switchport trunk allowed vlan 272-276,514-527

spanning-tree port type edge trunk

channel-group 26 mode active

interface Ethernet1/7

switchport mode trunk

interface Ethernet1/8

switchport mode trunk

interface Ethernet1/9

switchport mode trunk

interface Ethernet1/10


switchport mode trunk

interface Ethernet1/11

switchport mode trunk

interface Ethernet1/12

switchport mode trunk

interface Ethernet1/13

switchport mode trunk

interface Ethernet1/14

description uplink to .116 g/7

switchport mode trunk

switchport trunk allowed vlan 174,272-276,516-527

speed 1000

interface Ethernet1/15

switchport mode trunk

channel-group 1 mode active

interface Ethernet1/16

description uplink to l4a12-nexus5k-2 16

switchport mode trunk

channel-group 1 mode active

interface Ethernet1/17

description rtpsol44-iscsi-a0

untagged cos 5

switchport mode trunk

spanning-tree port type edge trunk

interface Ethernet1/18

description rtpsol44-iscsi-a1

untagged cos 5

switchport mode trunk

spanning-tree port type edge trunk

interface Ethernet1/19

switchport mode trunk

interface Ethernet1/20

switchport mode trunk

interface Ethernet1/21

switchport mode trunk

interface Ethernet1/22

switchport mode trunk

interface Ethernet1/23

switchport mode trunk

interface Ethernet1/24


switchport mode trunk

interface Ethernet1/25

switchport mode trunk

interface Ethernet1/26

switchport mode trunk

interface Ethernet1/27

switchport mode trunk

interface Ethernet1/28

description uplink to nex1-rack700h

switchport mode trunk

channel-group 28 mode active

interface Ethernet2/1

interface Ethernet2/2

interface Ethernet2/3

interface Ethernet2/4

interface Ethernet2/5

interface Ethernet2/6

interface Ethernet2/7

interface Ethernet2/8

interface Ethernet2/9

interface Ethernet2/10

interface Ethernet2/11

interface Ethernet2/12

interface Ethernet2/13

interface Ethernet2/14

interface Ethernet2/15

interface Ethernet2/16

interface mgmt0

ip address 10.6.116.39/24

line console

line vty

boot kickstart bootflash:/n5000-uk9-kickstart.5.2.1.N1.7.bin

boot system bootflash:/n5000-uk9.5.2.1.N1.7.bin


Nexus5K-2

version 5.2(1)N1(7)

hostname l4a12-nexus5k-2

feature telnet

cfs eth distribute

feature lacp

feature vpc

feature lldp

username admin password 5 $1$jzWGzfZr$Qx3wA4jPr7JjPPBccxiYh. role network-admin

no password strength-check

banner motd #Nexus 5000 Switch

#

ip domain-lookup

policy-map type network-qos pm-nq-global

class type network-qos class-default

mtu 9216

multicast-optimize

set cos 5

system qos

service-policy type network-qos pm-nq-global

service-policy type queuing input fcoe-default-in-policy

service-policy type queuing output fcoe-default-out-policy

policy-map type control-plane copp-system-policy-customized

class copp-system-class-default

police cir 2048 kbps bc 6400000 bytes

slot 1

port 29-32 type fc

snmp-server user admin network-admin auth md5 0xb1447442e6fec90ed20f37faec36f07f priv 0xb1447442e6fec90ed20f37faec36f07f localizedkey

vrf context management

ip route 0.0.0.0/0 10.6.116.1

vlan 1

vlan 272

name cisco-vspex

vlan 273

name solutions-vdi-1

vlan 274

name solutions-vdi-2

vlan 275

name solutions-vdi-3

vlan 276

name solutions-vdi-4

vlan 277

name solutions-vdi-5

vlan 278

name Solutions-vdi-6


vlan 279

name solutions-vdi-7

vlan 516

name solutions-10.6.116

vlan 517

name solutions-10.6.117

vpc domain 101

peer-keepalive destination 10.6.116.39

peer-gateway

port-profile default max-ports 512

interface port-channel1

description VPC-Peerlink

switchport mode trunk

spanning-tree port type network

speed 10000

vpc peer-link

interface port-channel13

description to FI-5A

switchport mode trunk

vpc 13

interface port-channel14

description to FI-5B

switchport mode trunk

vpc 14

interface port-channel18

description to FI-6A

switchport mode trunk

vpc 18

interface port-channel19

description to FI-6B

switchport mode trunk

vpc 19

interface port-channel25

description to DM2-1

switchport mode trunk

untagged cos 5

switchport trunk allowed vlan 272-276,514-527

vpc 25

interface port-channel26

description to DM3-1

switchport mode trunk

untagged cos 5

switchport trunk allowed vlan 272-276,514-527

vpc 26

interface port-channel28


description uplink to nex1-rack700h

switchport mode trunk

vpc 28

interface Ethernet1/1

description uplink to rtpsol-ucs5-A13

switchport mode trunk

channel-group 13 mode active

interface Ethernet1/2

description uplink to rtpsol-ucs5-B13

switchport mode trunk

channel-group 14 mode active

interface Ethernet1/3

description uplink to rtpsol-ucs6-A13

switchport mode trunk

channel-group 18 mode active

interface Ethernet1/4

description uplink to rtpsol-ucs6-B13

switchport mode trunk

channel-group 19 mode active

interface Ethernet1/5

description to rtpsol44-dm2-1

switchport mode trunk

switchport trunk allowed vlan 272-276,514-527

spanning-tree port type edge trunk

channel-group 25 mode active

interface Ethernet1/6

description to rtpsol44-dm3-1

switchport mode trunk

switchport trunk allowed vlan 272-276,514-527

spanning-tree port type edge trunk

channel-group 26 mode active

interface Ethernet1/7

switchport mode trunk

interface Ethernet1/8

switchport mode trunk

interface Ethernet1/9

switchport mode trunk

interface Ethernet1/10

switchport mode trunk

interface Ethernet1/11

switchport mode trunk

interface Ethernet1/12


switchport mode trunk

interface Ethernet1/13

switchport mode trunk

interface Ethernet1/14

switchport mode trunk

interface Ethernet1/15

switchport mode trunk

channel-group 1 mode active

interface Ethernet1/16

description uplink to l4a12-nexus5k-1 16

switchport mode trunk

channel-group 1 mode active

interface Ethernet1/17

description rtpsol44-iscsi-b0

untagged cos 5

switchport mode trunk

spanning-tree port type edge trunk

interface Ethernet1/18

description rtpsol44-iscsi-b1

untagged cos 5

switchport mode trunk

spanning-tree port type edge trunk

interface Ethernet1/19

switchport mode trunk

interface Ethernet1/20

switchport mode trunk

interface Ethernet1/21

switchport mode trunk

interface Ethernet1/22

switchport mode trunk

interface Ethernet1/23

switchport mode trunk

interface Ethernet1/24

switchport mode trunk

interface Ethernet1/25

switchport mode trunk

interface Ethernet1/26

switchport mode trunk

interface Ethernet1/27


switchport mode trunk

interface Ethernet1/28

switchport mode trunk

channel-group 28 mode active

interface mgmt0

ip address 10.6.116.40/24

line console

line vty

boot kickstart bootflash:/n5000-uk9-kickstart.5.2.1.N1.7.bin

boot system bootflash:/n5000-uk9.5.2.1.N1.7.bin

Appendix B–Cisco Nexus 1000V VSM

Configuration

version 4.2(1)SV2(2.2)

svs switch edition essential

feature telnet

username admin password 5 $1$hHzgvfVd$OHHoGrgtWHxOhUW3SLg48/ role network-admin

banner motd #Nexus 1000v Switch#

ssh key rsa 2048

ip domain-lookup

ip host N1KVswitch 172.16.64.39

hostname N1KVswitch

errdisable recovery cause failed-port-state

policy-map type qos jumbo-mtu

policy-map type qos n1kv-policy

class class-default

set cos 5

vem 3

host id f45487e9-e5ee-e211-600d-000000000012

vem 4

host id f45487e9-e5ee-e211-600d-000000000002

vem 5

host id f45487e9-e5ee-e211-600d-000000000007

vem 6

host id f45487e9-e5ee-e211-600d-000000000008

snmp-server user admin network-admin auth md5 0xe90e1fbf4aa6dd0ccde9bb8f2db21c3f priv 0xe90e1fbf4aa6dd0ccde9bb8f2db21c3f localizedkey

vrf context management

ip route 0.0.0.0/0 172.16.64.1

vlan 1,272,516

vlan 272

name private


vlan 516

name public

port-channel load-balance ethernet source-mac

port-profile default max-ports 32

port-profile type ethernet Unused_Or_Quarantine_Uplink

vmware port-group

shutdown

description Port-group created for Nexus1000V internal usage. Do not use.

state enabled

port-profile type vethernet Unused_Or_Quarantine_Veth

vmware port-group

shutdown

description Port-group created for Nexus1000V internal usage. Do not

use.

state enabled

port-profile type ethernet system-uplink

vmware port-group

switchport mode trunk

switchport trunk allowed vlan 272,516

mtu 9000

channel-group auto mode on mac-pinning

no shutdown

system vlan 272,516

state enabled

port-profile type vethernet n1kv-l3

capability l3control

vmware port-group

switchport mode access

switchport access vlan 272

service-policy type qos input n1kv-policy

no shutdown

system vlan 272

state enabled

port-profile type vethernet vm-network

vmware port-group

switchport mode access

switchport access vlan 272

service-policy type qos input n1kv-policy

no shutdown

system vlan 272

max-ports 1024

state enabled

vdc N1KVswitch id 1

limit-resource vlan minimum 16 maximum 2049

limit-resource monitor-session minimum 0 maximum 2

limit-resource vrf minimum 16 maximum 8192

limit-resource port-channel minimum 0 maximum 768

limit-resource u4route-mem minimum 1 maximum 1

limit-resource u6route-mem minimum 1 maximum 1

interface port-channel1

Page 283: Cisco Desktop Virtualization Solution for EMC VSPEX with ......using Citrix Provisioning Server 7.1 and Citrix XenDesktop 7.5, with a mix of hosted shared desktops (70%) and pooled

inherit port-profile system-uplink

vem 3

interface port-channel2

inherit port-profile system-uplink

vem 4

interface mgmt0

ip address 172.16.64.39/19

interface control0

line console

boot kickstart bootflash:/nexus-1000v-kickstart.4.2.1.SV2.2.2.bin sup-1

boot system bootflash:/nexus-1000v.4.2.1.SV2.2.2.bin sup-1

boot kickstart bootflash:/nexus-1000v-kickstart.4.2.1.SV2.2.2.bin sup-2

boot system bootflash:/nexus-1000v.4.2.1.SV2.2.2.bin sup-2

svs-domain

domain id 1

control vlan 1

packet vlan 1

svs mode L3 interface mgmt0

svs connection vcenter

protocol vmware-vim

remote ip address 172.16.64.4 port 80

vmware dvs uuid "49 40 31 50 7e 6a d6 e8-a3 c2 f7 e5 00 32 8b 5d"

datacenter-name CVSPEX-DT

admin user n1kUser

max-ports 8192

connect

vservice global type vsg

tcp state-checks invalid-ack

tcp state-checks seq-past-window

no tcp state-checks window-variation

no bypass asa-traffic

vnm-policy-agent

registration-ip 0.0.0.0

shared-secret **********

log-level
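After the VSM configuration above is applied, the VEM modules and the vCenter connection can be sanity-checked from the VSM CLI. The following are standard Nexus 1000V verification commands, shown here as a suggested check rather than as part of the validated configuration:

```
N1KVswitch# show module
N1KVswitch# show svs connections
N1KVswitch# show svs domain
```

`show module` should list each VEM (modules 3 through 6 in this configuration) as powered up, and `show svs connections` should report the vcenter connection as connected.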


Appendix C–Server Performance Charts for Mixed Workload Scale Test Run

The charts in this appendix plot esxtop counters for each of the seven vSphere hosts during the 1000-seat mixed workload scale test run (test ID 1000u-mix-32l-bm-0607-1713), sampled from approximately 9:13 PM to 11:07 PM. For each host, four charts are shown: non-kernel memory, per-vmnic network throughput, physical CPU core utilization, and physical CPU utilization.

1000u-mix-32l-bm-0607-1713-vsphere2
[Chart: \\vsphere2.cvspex.rtp.lab.emc.com\Memory\NonKernel MBytes]
[Chart: \\vsphere2.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108876:vmnic0) and (DvsPortset-0:67108877:vmnic1), MBitsReceived/sec and MBitsTransmitted/sec]
[Chart: \\vsphere2.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Core Util Time]
[Chart: \\vsphere2.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Util Time]

1000u-mix-32l-bm-0607-1713-vsphere3
[Chart: \\vsphere3.cvspex.rtp.lab.emc.com\Memory\NonKernel MBytes]
[Chart: \\vsphere3.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108876:vmnic0) and (DvsPortset-0:67108877:vmnic1), MBitsReceived/sec and MBitsTransmitted/sec]
[Chart: \\vsphere3.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Core Util Time]
[Chart: \\vsphere3.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Util Time]

1000u-mix-32l-bm-0607-1713-vsphere10
[Chart: \\vsphere10.cvspex.rtp.lab.emc.com\Memory\NonKernel MBytes]
[Chart: \\vsphere10.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108876:vmnic0) and (DvsPortset-0:67108877:vmnic1), MBitsReceived/sec and MBitsTransmitted/sec]
[Chart: \\vsphere10.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Core Util Time]
[Chart: \\vsphere10.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Util Time]

1000u-mix-32l-bm-0607-1713-vsphere4
[Chart: \\vsphere4.cvspex.rtp.lab.emc.com\Memory\NonKernel MBytes]
[Chart: \\vsphere4.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108881:vmnic0) and (DvsPortset-0:67108882:vmnic1), MBitsReceived/sec and MBitsTransmitted/sec]
[Chart: \\vsphere4.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Core Util Time]
[Chart: \\vsphere4.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Util Time]

1000u-mix-32l-bm-0607-1713-vsphere5
[Chart: \\vsphere5.cvspex.rtp.lab.emc.com\Memory\NonKernel MBytes]
[Chart: \\vsphere5.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108881:vmnic0) and (DvsPortset-0:67108882:vmnic1), MBitsReceived/sec and MBitsTransmitted/sec]
[Chart: \\vsphere5.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Core Util Time]
[Chart: \\vsphere5.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Util Time]

1000u-mix-32l-bm-0607-1713-vsphere12
[Chart: \\vsphere12.cvspex.rtp.lab.emc.com\Memory\NonKernel MBytes]
[Chart: \\vsphere12.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108881:vmnic0) and (DvsPortset-0:67108882:vmnic1), MBitsReceived/sec and MBitsTransmitted/sec]
[Chart: \\vsphere12.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Core Util Time]
[Chart: \\vsphere12.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Util Time]

1000u-mix-32l-bm-0607-1713-vsphere13
[Chart: \\vsphere13.cvspex.rtp.lab.emc.com\Memory\NonKernel MBytes]
[Chart: \\vsphere13.cvspex.rtp.lab.emc.com\Network Port(DvsPortset-0:67108881:vmnic0) and (DvsPortset-0:67108882:vmnic1), MBitsReceived/sec and MBitsTransmitted/sec]
[Chart: \\vsphere13.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Core Util Time]
[Chart: \\vsphere13.cvspex.rtp.lab.emc.com\Physical Cpu(_Total)\% Util Time]