
Dell Wyse Datacenter

Reference Architecture for Dell VRTX and XenDesktop

2/21/2014, Phase 6, Version 1.1

THIS DOCUMENT IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

Copyright © 2014 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.

Dell, the Dell logo, and the Dell badge are trademarks of Dell Inc. Microsoft and Windows are registered trademarks of Microsoft Corporation in the United States and/or other countries. VMware is a registered trademark of VMware, Inc. Citrix and XenDesktop are registered trademarks of Citrix Systems, Inc. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.


Contents

1 Introduction
  1.1 Purpose of this document
  1.2 Scope

2 Solution Architecture Overview
  2.1 Introduction
    2.1.1 Physical Architecture Overview
    2.1.2 Dell Wyse Datacenter – Solution Layers
  2.2 Shared Tier 1
    2.2.1 VRTX – Up to 600 Users

3 Hardware Components
  3.1 PowerEdge VRTX
  3.2 Network Configuration
    3.2.1 Network HA
  3.3 Server Configuration
  3.4 Optional Networking
    3.4.1 Force10 S55 (ToR Switch)
  3.5 Dell Wyse Cloud Clients
    3.5.1 ThinOS – T10D
    3.5.2 ThinOS – D10D
    3.5.3 Windows Embedded 7 – Z90Q7
    3.5.4 Windows Embedded 8 – Z90Q8
    3.5.5 SUSE Linux – Z50D
    3.5.6 Dell Wyse Zero – Xenith 2
    3.5.7 Dell Wyse Zero – Xenith Pro 2
    3.5.8 Dell Wyse Cloud Connect
    3.5.9 Dell Venue 11 Pro
    3.5.10 Dell Chromebook 11

4 Software Components
  4.1 Citrix XenDesktop
    4.1.1 Machine Creation Services (MCS)
    4.1.2 Citrix Personal vDisk Technology
    4.1.3 Citrix Profile Manager
    4.1.4 SQL Databases
    4.1.5 DNS
    4.1.6 NetScaler Load Balancing
  4.2 VDI Hypervisor Platforms
    4.2.1 VMware vSphere 5
    4.2.2 Microsoft Windows Server 2012 R2 Hyper-V
  4.3 Citrix NetScaler
  4.4 Citrix CloudBridge

5 Solution Architecture for XenDesktop 7
  5.1 Hyper-V
    5.1.1 Storage Architecture Overview
    5.1.2 Compute + Management Infrastructure
    5.1.3 Virtual Networking
  5.2 vSphere
    5.2.1 Compute + Management Server Infrastructure
    5.2.2 Storage Architecture Overview
    5.2.3 Virtual Networking
  5.3 XenDesktop on VRTX Communication Flow

6 Solution Performance and Testing
  6.1 Load Generation and Monitoring
    6.1.1 Login VSI – Login Consultants
    6.1.2 Liquidware Labs Stratusphere UX
    6.1.3 VMware vCenter
    6.1.4 Microsoft Perfmon
  6.2 Testing and Validation
    6.2.1 Testing Process
    6.2.2 Dell Wyse Datacenter Workloads and Profiles
  6.3 XenDesktop on VRTX Test Results
    6.3.1 vSphere Summary
    6.3.2 vSphere 5.1 Update 1 Test Results
    6.3.3 Hyper-V Summary
    6.3.4 Microsoft Windows 2012 R2 Hyper-V Test Charts

About the Authors


1 Introduction

1.1 Purpose of this document

This document describes the Dell Wyse Datacenter Reference Architecture for Dell VRTX and Citrix XenDesktop (MCS), supporting the Remote Office/Branch Office (ROBO) use case.

It addresses the architecture design, configuration, and implementation considerations for the key components required to deliver virtual desktops via XenDesktop 7 on the Dell VRTX shared infrastructure platform, using either VMware vSphere or Microsoft Windows Server Hyper-V.

1.2 Scope

Relative to delivering the virtual desktop environment, the objectives of this document are to:

● Define the detailed technical design for the solution.

● Define the hardware requirements to support the design.

● Define the design constraints which are relevant to the design.

● Define relevant risks, issues, assumptions and concessions – referencing existing ones where possible.

● Provide a breakdown of the design into key elements such that the reader receives an incremental or modular explanation of the design.

● Provide solution scaling and component selection guidance.


2 Solution Architecture Overview

2.1 Introduction

The Dell Wyse Datacenter Solution leverages a core set of hardware and software components consisting of 5 primary layers:

● Networking Layer

● Compute Server Layer

● Management Server Layer

● Storage Layer

● Endpoint Layer

These components have been integrated and tested to provide the optimal balance of high performance and lowest cost per user. Additionally, the Dell Wyse Datacenter Solution includes an approved extended list of optional components in the same categories. These components give IT departments the flexibility to custom tailor the solution for environments with unique VDI feature, scale or performance needs.

2.1.1 Physical Architecture Overview

The core Dell Wyse Datacenter architecture for ROBO consists of the Shared Tier 1 solution model within a single VRTX chassis. "Tier 1" in the VDI context is the high-performance disk tier from which the VDI desktop sessions execute; Tier 2 is storage that prioritizes capacity over performance, used for management VMs and user data. The tiers are separated by default to maximize user performance. Dell Wyse Datacenter is a 100% virtualized solution architecture.

[Figure: Shared Tier 1 model – a combined management + compute server hosts the Mgmt VMs and VDI VMs; VDI disks reside on shared Tier 1 (15K SAS), while management disks and user data reside on shared Tier 2 (10K SAS).]
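The two-tier model reduces to a simple placement rule. A minimal sketch in Python; the tier labels follow the text, and the volume role names are illustrative, not part of the product:

```python
# Illustrative sketch of the two-tier placement rule described above.
# Tier names come from the text; the role strings are assumptions.

TIER_1 = "Shared Tier 1 (15K SAS)"   # high-performance: VDI desktop disks
TIER_2 = "Shared Tier 2 (10K SAS)"   # capacity: management VMs and user data

def storage_tier(volume_role: str) -> str:
    """Return the storage tier a volume belongs on, per the tiering model."""
    if volume_role == "vdi-desktop":
        return TIER_1
    if volume_role in ("management-vm", "user-data"):
        return TIER_2
    raise ValueError(f"unknown volume role: {volume_role}")

print(storage_tier("vdi-desktop"))   # Shared Tier 1 (15K SAS)
print(storage_tier("user-data"))     # Shared Tier 2 (10K SAS)
```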


In the Shared Tier 1 solution model for the ROBO use case, the compute and management functions are combined across all hosts, with shared Tier 1 and Tier 2 storage defined by rotational speed. This "cluster in a box" methodology uses hypervisor clustering functionality to balance all VMs across all hosts present in the VRTX chassis.

2.1.2 Dell Wyse Datacenter – Solution Layers

The Dell VRTX includes an integrated 8-port switch which can be uplinked to an existing switching infrastructure or to an optional high-performance Force10 switch. This integrated switch connects to 2 x 1Gb NICs in the A fabric of each blade host in the chassis.

The compute and management layers are combined in this solution and consist of the server resources responsible for hosting the XenDesktop user sessions and management VMs necessary to support the VDI infrastructure. These can be hosted either via VMware vSphere or Microsoft Hyper-V hypervisors.

The Storage layer consists of two tiers of internal storage to suit the VDI VMs and management components individually.

2.2 Shared Tier 1

2.2.1 VRTX – Up to 600 Users

For remote or branch office deployment scenarios, Dell Wyse Datacenter offers a 2- or 4-blade cluster plus 25-disk Direct Attached Storage (DAS) solution, all contained within a single 5U chassis. All switching, compute, management, and storage are included. This solution can support up to 500 pooled VDI users in an incredibly efficient, small, and cost-effective platform. Additional ToR switching is available if required.
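The capacity figures above can be sketched as simple arithmetic. The 125 users-per-blade density below is inferred from the stated 500-user maximum on 4 blades; treat it as an assumption rather than a validated sizing figure:

```python
# Rough capacity sketch for the VRTX "cluster in a box" options.
# USERS_PER_BLADE is an inferred assumption (500 users / 4 blades).

USERS_PER_BLADE = 125  # assumed pooled-VDI density per M620 blade

def cluster_capacity(blades: int, n_plus_one: bool = False) -> int:
    """Estimate pooled-user capacity for a 2- or 4-blade VRTX cluster.

    With n_plus_one=True, one blade's capacity is held in reserve so the
    cluster can absorb a host failure without oversubscribing survivors.
    """
    if blades not in (2, 4):
        raise ValueError("this solution offers 2- or 4-blade configurations")
    usable = blades - 1 if n_plus_one else blades
    return usable * USERS_PER_BLADE

print(cluster_capacity(4))                   # 500
print(cluster_capacity(4, n_plus_one=True))  # 375
```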

2.2.1.1 Shared Tier 1 Rack – Conceptual Network Architecture

All ToR traffic connecting to the VRTX integrated switch should be Layer 2 (switched locally), with all Layer 3 (routable) VLANs trunked from a core or distribution switch. The following diagram illustrates the logical relationship of the VRTX chassis to the integrated switch connections and VLAN assignments, as well as the logical VLAN flow in relation to the core switch.

[Figure: Conceptual network architecture – the core switch trunks to the ToR switch, which uplinks to the VRTX integrated switch; the iDRAC, Mgmt, and VDI VLANs flow through to the compute + mgmt hosts and the internal storage.]


3 Hardware Components

3.1 PowerEdge VRTX

The Dell PowerEdge VRTX reduces the complexity of deployment in remote and branch offices by combining servers, storage, and networking into a consolidated 5U chassis. The VRTX chassis supports a maximum of 4 blade servers and includes internally shared storage using a SAS-based RAID controller (PERC) with 1GB cache, shared across all four blades. An internal Ethernet switch provides external connectivity to the client network. This unique packaging eliminates the need for external storage arrays and fabrics to connect the compute nodes.


Two “cluster in a box” options for remote/branch office deployment are shown below. This solution leverages Microsoft Failover Clustering for Hyper-V deployments and VMware vSphere HA to protect all VMs across all blades. The storage configuration provides two tiers of shared internal storage, optimized for the best balance of performance and capacity utilization. Hypervisor clustering enables complete mobility for the management and desktop VMs within the cluster during periods of maintenance and migration. The solution is available in either a 2-blade or 4-blade configuration including the required storage: the 2-blade solution requires 15 SAS disks in total; the 4-blade solution requires the full 25 disks. The image below describes the blade and disk configuration options.


3.2 Network Configuration

The VRTX chassis can be configured with either switched (default) or pass-through modules. The switched module, which is the Dell Wyse Datacenter solution default and is shown below, supports up to 8 external ports for uplinks. In its default form this solution configuration makes use of the single A fabric only. External uplinks should be cabled and configured in a link aggregation group (LAG) to support the desired amount of upstream bandwidth.
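Sizing the uplink LAG is back-of-the-envelope arithmetic. A sketch under stated assumptions: the 8-port ceiling comes from the integrated switch described above, while the per-user bandwidth figure is a hypothetical planning input, not a measured value:

```python
import math

# Each external port on the integrated A-fabric switch is 1Gb; the switched
# module exposes at most 8 external ports (per the text).
PORT_GBPS = 1.0
MAX_LAG_PORTS = 8

def lag_ports_needed(users: int, mbps_per_user: float) -> int:
    """Smallest LAG size (in 1Gb ports) to carry the estimated peak load."""
    required_gbps = users * mbps_per_user / 1000.0
    ports = math.ceil(required_gbps / PORT_GBPS)
    if ports > MAX_LAG_PORTS:
        raise ValueError("demand exceeds the 8-port integrated switch uplink")
    return max(ports, 2)  # keep at least 2 ports for link redundancy

# Hypothetical: 500 users at ~5 Mbps of display traffic each -> 2.5 Gbps.
print(lag_ports_needed(500, 5.0))  # 3
```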

3.2.1 Network HA

For configurations requiring network HA, add Broadcom 5720 1Gb NICs to the PCIe slots in the VRTX chassis; these connect to the pre-populated PCIe mezzanine cards in each blade server. This provides an alternative physical network path out of the VRTX chassis, adding bandwidth and redundancy through additional fabrics. A PCIe NIC must be added for each blade in the chassis, as these connections are mapped 1:1. As the graphic below shows, each M620 in the VRTX chassis uses the 10Gb NDC (throttled to 1Gb) in the A fabric to connect to ports on the internal 1Gb switch, while the PCIe mezzanine cards in the B fabric connect to ports provided by the external 1Gb NICs in the PCIe slots of the VRTX chassis (1 per blade).
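The 1:1 mapping above has a practical consequence: a blade without its matching PCIe NIC has only the fabric A path. A minimal sketch of that rule; the path strings are descriptive only:

```python
# Sketch of the dual-path NIC layout described above: fabric A reaches the
# internal switch via the blade's NDC, fabric B reaches an external PCIe NIC
# that must be installed 1:1 per blade slot.

def network_paths(blade: int, pcie_nics_installed: int) -> list[str]:
    """Return the physical network paths available to a given blade slot."""
    paths = [f"blade{blade}: NDC (fabric A) -> internal 1Gb switch"]
    # PCIe NICs map 1:1 to blade slots, so blade N needs NIC N present.
    if blade <= pcie_nics_installed:
        paths.append(f"blade{blade}: mezz (fabric B) -> PCIe 1Gb NIC {blade}")
    return paths

# A 4-blade chassis with only 2 PCIe NICs leaves blades 3 and 4 single-pathed.
for b in range(1, 5):
    redundant = len(network_paths(b, pcie_nics_installed=2)) == 2
    print(b, "HA" if redundant else "single path")
```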


Physical cabling representation:

Logical representation:

3.3 Server Configuration

The PowerEdge M620 is a feature-rich, dual-processor, half-height blade server offering a blend of density, performance, efficiency, and scalability. It delivers remarkable computational density, scaling up to 24 cores across its two-socket Intel Xeon processors and up to 768GB of DDR3 memory across 24 DIMM slots, all in an extremely compact form factor.

[Figure: Shared Tier 1 – VRTX – HA. Each M620 blade connects its 10Gb dual-port NDC (fabric A) to the internal 1Gb switches (A1/A2) and its 1Gb quad-port mezzanine (fabric B) to the PCIe 1Gb NICs (B1/B2); the C fabric slots (C1/C2) and mezzanine C remain open. The blade NICs are combined into a NIC team/vSwitch.]


Care has been taken to optimize the server platform specifically for Citrix XenDesktop. CPU is critical in VDI environments and is ultimately the key limiting factor on compute hosts in all scenarios: generally speaking, the more CPU performance available, the more users a single server can host.

3.4 Optional Networking

3.4.1 Force10 S55 (ToR Switch)

The Dell Force10 S-Series S55 1/10GbE ToR (Top-of-Rack) switch is optimized for lowering operational costs while increasing scalability and improving manageability at the network edge. Optimized for high-performance data center applications, the S55 is recommended for Dell Wyse Datacenter deployments of 6000 users or fewer. It leverages a non-blocking architecture that delivers line-rate, low-latency L2 and L3 switching to eliminate network bottlenecks. The high-density S55 design provides 48 GbE access ports with up to four modular 10GbE uplinks in just 1RU to conserve valuable rack space. The S55 incorporates multiple architectural features that optimize data center network efficiency and reliability, including IO-panel-to-PSU or PSU-to-IO-panel airflow for hot/cold aisle environments, plus redundant, hot-swappable power supplies and fans. A "scale-as-you-grow" ToR solution that is simple to deploy and manage, up to eight S55 switches can be stacked into a single logical switch using Dell Force10 stacking technology and high-speed stacking modules.

Model:    Force10 S55
Features: 44 x BaseT (10/100/1000) + 4 x SFP; redundant PSUs
Options:  4 x 1Gb SFP ports that support copper or fiber; 12Gb or 24Gb stacking (up to 8 switches); 2 x modular slots for 10Gb uplinks or stacking modules
Uses:     ToR switch for 1Gb LAN connections


Guidance:

10Gb uplinks to a core or distribution switch, using the rear 10Gb uplink modules, are the preferred design choice. If 10Gb connectivity is unavailable, the front 4 x 1Gb SFP ports can be used instead. These SFP ports support copper cabling and can be upgraded to optical if a longer run is needed.

For more information on the S55 switch and Dell Force10 networking, please visit: LINK

3.4.1.1 Force10 S55 Stacking

The ToR switches in the network layer can optionally be stacked with additional switches if greater port count or redundancy is desired. Each switch needs a stacking module plugged into a rear bay and connected with a stacking cable. The best practice for stacks of more than two switches is to cable them in a ring configuration, with the last switch in the stack cabled back to the first. Uplinks must be configured on all switches in the stack back to the core to provide redundancy and failure protection.
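The ring-cabling best practice above can be sketched as a small topology helper. This is an illustration of the cabling pattern, not Force10 configuration syntax:

```python
# Sketch of the ring-cabling best practice: for stacks larger than two
# switches, each unit cables to the next and the last cables back to the
# first, so any single stacking-link failure leaves the stack connected.

def ring_cabling(stack_size: int) -> list[tuple[int, int]]:
    """Return the stacking-cable pairs for a stack of `stack_size` switches."""
    if not 2 <= stack_size <= 8:          # S55 stacks support up to 8 units
        raise ValueError("stack size must be between 2 and 8")
    cables = [(i, i + 1) for i in range(1, stack_size)]
    if stack_size > 2:
        cables.append((stack_size, 1))    # close the ring back to unit 1
    return cables

print(ring_cabling(3))  # [(1, 2), (2, 3), (3, 1)]
```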

Please reference the following Force10 whitepaper for specifics on stacking best practices and configuration: LINK

3.5 Dell Wyse Cloud Clients

The following Dell Wyse clients will deliver a superior Citrix user experience and are the recommended choices for this solution.

3.5.1 ThinOS – T10D

The T10D sets the standard for thin clients. Providing an exceptional user experience, the T10D features the incredibly fast Dell Wyse ThinOS, built for environments in which security is critical: there is no attack surface to put your data at risk. It boots in just seconds and logs in securely to almost any network. The T10D delivers a superior Citrix VDI user experience, along with usability and management features found in premium thin clients. It delivers outstanding performance based on its dual-core system-on-a-chip (SoC) design, and a built-in media processor provides smooth multimedia, bi-directional audio, and Flash playback. Flexible mounting options let you position the T10D vertically or horizontally on your desk, on the wall, or behind your display. Using about 7 watts of power in full operation, the T10D creates very little heat for a greener, more comfortable working environment. Link

3.5.2 ThinOS – D10D

Designed for knowledge workers and power users, the new Dell Wyse D10D is a high-performance thin client based on Dell Wyse ThinOS, the virus-immune firmware base designed for optimal thin client security, performance, and ease of use. Highly secure, compact, and powerful, the D10D combines Dell Wyse ThinOS with a dual-core AMD 1.4 GHz processor and a revolutionary unified graphics engine for an outstanding user experience. The D10D addresses the performance challenges of processing-intensive applications like computer-aided design, multimedia, HD video, and 3D modeling. Scalable enterprise-wide on-premise or cloud-based management provides simple deployment, patching, and updates. Take a unit from box to productivity in minutes with auto configuration. Delivering outstanding processing speed and power, security, and display performance, the D10D offers a unique combination of performance, efficiency, and affordability. For more information, please visit: Link

3.5.3 Windows Embedded 7 – Z90Q7

The Dell Wyse Z90Q7 is a super high-performance Windows Embedded Standard 7 thin client for virtual desktop environments. Featuring a quad-core AMD processor and an integrated graphics engine that significantly boost performance, the Z90Q7 achieves exceptional speed and power for the most demanding VDI and embedded Windows applications, rotational 3D graphics, 3D simulation and modeling, unified communications, and multi-screen HD multimedia. Take a unit from box to productivity in minutes. Just select the desired configuration and the Z90Q7 does the rest automatically—no need to reboot. Scale to tens of thousands of endpoints with Dell Wyse WDM software or leverage your existing Microsoft System Center Configuration Manager platform. The Z90Q7 is the thin client for power users who need workstation-class performance on their desktop or within a desktop virtualization environment (x86 or x64). For more information, please visit: Link

3.5.4 Windows Embedded 8 – Z90Q8

Dell Wyse Z90Q8 is a super high-performance Windows Embedded 8 Standard thin client for virtual desktop environments. Featuring a quad-core AMD processor, the Z90Q8 offers a vibrant Windows 8 experience and achieves exceptional speed and power for the most demanding embedded Windows applications, rich 3D graphics and HD multimedia. And you can scale to tens of thousands of Z90Q8 endpoints with Dell Wyse Device Manager (WDM) software, or leverage your existing Microsoft System Center Configuration Manager platform. With single-touch or multi-touch capable displays, the Z90Q8 adds the ease of an intuitive touch user experience. The Z90Q8 is an ideal thin client for offering a high-performance Windows 8 experience with the most demanding mix of virtual desktop or cloud applications (x86 or x64). For more information please visit: Link

3.5.5 SUSE Linux – Z50D

Designed for power users, the new Dell Wyse Z50D is the highest-performing thin client on the market. Highly secure and ultra-powerful, the Z50D combines Dell Wyse-enhanced SUSE Linux Enterprise with a dual-core AMD 1.65 GHz processor and a revolutionary unified engine for an unprecedented user experience. The Z50D eliminates performance constraints for high-end, processing-intensive applications like computer-aided design, multimedia, HD video, and 3D modeling. Scalable enterprise-wide management provides simple deployment, patching, and updates. Take a unit from box to productivity in minutes with auto configuration. Delivering unmatched processing speed and power, security, and display performance, it's no wonder no other thin client can compare. For more information, please visit: Link

3.5.6 Dell Wyse Zero – Xenith 2

Establishing a new price/performance standard for zero clients for Citrix, the new Dell Wyse Xenith 2 provides an exceptional user experience at a highly affordable price for Citrix XenDesktop and XenApp environments. With zero attack surface, the ultra-secure Xenith 2 gives network-borne viruses and malware no target to attack. Xenith 2 boots in just seconds and delivers exceptional performance for Citrix XenDesktop and XenApp users, while offering usability and management features found in premium Dell Wyse cloud client devices. Xenith 2 delivers outstanding performance based on its system-on-chip (SoC) design, optimized with the Dell Wyse zero architecture, and a built-in media processor delivers smooth multimedia, bi-directional audio, and Flash playback. Flexible mounting options let you position Xenith 2 vertically or horizontally. Using about 7 watts of power in full operation, the Xenith 2 creates very little heat for a greener working environment. For more information, please visit: Link

3.5.7 Dell Wyse Zero – Xenith Pro 2

Dell Wyse Xenith Pro 2 is the next-generation zero client for Citrix HDX and Citrix XenDesktop, delivering ultimate performance, security and simplicity. With a powerful dual core AMD G-series CPU, Xenith Pro 2 is faster than competing devices. This additional computing horsepower allows dazzling HD multimedia delivery without overtaxing your server or network. Scalable enterprise-wide management provides simple deployment, patching and updates—your Citrix XenDesktop server configures it out-of-the-box to your preferences for plug-and-play speed and ease of use. Virus and malware immune, the Xenith Pro 2 draws under 9 watts of power in full operation—that’s less than any PC on the planet. For more information please visit: Link

3.5.8 Dell Wyse Cloud Connect

Designed to promote bring-your-own-device (BYOD) environments, Dell Wyse Cloud Connect allows you to securely access and share work and personal files, presentations, applications, and other content from your business or your home. Managed through Dell Wyse Cloud Client Manager software-as-a-service (SaaS), IT managers can ensure that each Cloud Connect device is used by the appropriate person with the right permissions and access to the appropriate apps and content based on role, department, and location. Slightly larger than a USB memory stick, Cloud Connect is an ultra-compact, multimedia-capable device. Simply plug it into any available Mobile High-Definition Link (MHL)/HDMI port on a TV or monitor, attach a Bluetooth keyboard and mouse, and you're off and running. Easy to slip into your pocket or bag, it provides an HD-quality window to the cloud: great for meetings and presentations while on business travel, or for cruising the internet and checking email after a day of work. For more information, please visit: Link


3.5.9 Dell Venue 11 Pro

Meet the ultimate in productivity, connectivity and collaboration. Enjoy full laptop performance in an ultra-portable tablet that has unmatched flexibility for a business in motion. This dual purpose device works as a tablet when you're out in the field but also enables you to work on your desktop in the office thanks to an optional dock. For more information, please visit: Link

3.5.10 Dell Chromebook 11

The lightweight, easy-to-use Dell Chromebook 11 helps turn education into exploration - without the worries of safety or security. Priced to make 1:1 computing affordable today, Chromebook 11 is backed by Dell support services to make the most of your budget for years to come. The Chrome OS and Chrome browser get students online in an instant and loads web pages in seconds. A high-density battery supported by a 4th Gen Intel® processor provides up to 10 hours of power. Encourage creativity with the Chromebook 11 and its multimedia features that include an 11.6" screen, stereo sound and webcam. For more information, please visit: Link


4 Software Components

4.1 Citrix XenDesktop

The solution is based on Citrix XenDesktop 7.1 which provides a complete end-to-end solution delivering Microsoft Windows virtual desktops or server-based hosted shared sessions to users on a wide variety of endpoint devices. Virtual desktops are dynamically assembled on demand, providing users with pristine, yet personalized, desktops each time they log on.

Citrix XenDesktop provides a complete virtual desktop delivery system by integrating several distributed components with advanced configuration tools that simplify the creation and real-time management of the virtual desktop infrastructure.

The core XenDesktop components include:

● Studio

― Studio is the management console that enables you to configure and manage your deployment, eliminating the need for separate management consoles for managing delivery of applications and desktops. Studio provides various wizards to guide you through the process of setting up your environment, creating your workloads to host applications and desktops, and assigning applications and desktops to users.

● Director

― Director is a web-based tool that enables IT support teams to monitor an environment, troubleshoot issues before they become system-critical, and perform support tasks for end users. You can also view and interact with a user's sessions using Microsoft Remote Assistance.


● Receiver

― Installed on user devices, Citrix Receiver provides users with quick, secure, self-service access to documents, applications, and desktops from any of the user's devices including smartphones, tablets, and PCs. Receiver provides on-demand access to Windows, Web, and Software as a Service (SaaS) applications.

● Delivery Controller (DC)

― Installed on servers in the data center, the controller authenticates users, manages the assembly of users’ virtual desktop environments, and brokers connections between users and their virtual desktops.

● StoreFront

― StoreFront authenticates users to sites hosting resources and manages stores of desktops and applications that users access.

● License Server

― The Citrix License Server is an essential component of any Citrix-based solution. Every Citrix product environment must have at least one shared or dedicated license server. License servers are computers that are either partly or completely dedicated to storing and managing licenses. Citrix products request licenses from a license server when users attempt to connect.

● Machine Creation Services (MCS)

― A collection of services that work together to create virtual servers and desktops from a master image on demand, optimizing storage utilization and providing a pristine virtual machine to users every time they log on. Machine Creation Services is fully integrated and administered in Citrix Studio.

● Virtual Delivery Agent (VDA)

― The Virtual Delivery Agent is a transparent plugin that is installed on every virtual desktop or XenApp host (RDSH) and enables the direct connection between the virtual desktop and users’ endpoint devices.

4.1.1 Machine Creation Services (MCS)

Citrix Machine Creation Services is the native provisioning mechanism within Citrix XenDesktop for virtual desktop image creation and management. Machine Creation Services uses the hypervisor APIs to create, start, stop, and delete virtual desktop images. Desktop images are organized in a Machine Catalog and within that catalog there are a number of options available to create and deploy virtual desktops:

Random: Virtual desktops are assigned randomly as users connect. When they log off, the desktop is reset to its original state and made available for another user to log in and use. Any changes made by the user are discarded at log off.

Static: Virtual desktops are assigned to the same user every time, with three options for how to handle changes made to the desktop: store them on the local vDisk, store them on a Personal vDisk, or discard them at user log off.

All the desktops in a random or static catalog are based off a master desktop template which is selected during the catalog creation process. MCS then takes snapshots of the master template and layers two additional virtual disks on top: an Identity vDisk and a Difference vDisk. The Identity vDisk includes all the specific desktop identity information such as host names and passwords. The Difference vDisk is where all the writes and changes to the desktop are stored. The Identity and Difference vDisks for each desktop are stored on the same data store as their related clone.

While traditionally used for small to medium sized XenDesktop deployments, MCS can bring substantial Tier 1 storage cost savings because of the snapshot/identity/difference disk methodology. The Tier 1 disk space requirements of the identity and difference disks, when layered on top of a master image snapshot, are far less than those of a dedicated desktop architecture.
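The random and static catalog types described above map to options in the XenDesktop 7 PowerShell SDK (installed with Studio). The sketch below shows only the broker catalog step with illustrative names; a complete MCS deployment also requires an identity pool and provisioning scheme (New-AcctIdentityPool, New-ProvScheme), omitted here for brevity.

```powershell
# Load the XenDesktop 7 SDK snap-ins (installed with Citrix Studio)
Add-PSSnapin Citrix.*

# Random (pooled) catalog: user changes are discarded at log off
New-BrokerCatalog -Name "Pooled-Random" `
    -AllocationType Random `
    -ProvisioningType MCS `
    -PersistUserChanges Discard `
    -SessionSupport SingleSession

# Static catalog with Personal vDisk: user changes persist on the PvD
New-BrokerCatalog -Name "Static-PvD" `
    -AllocationType Static `
    -ProvisioningType MCS `
    -PersistUserChanges OnPvd `
    -SessionSupport SingleSession
```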

4.1.2 Citrix Personal vDisk Technology

Citrix Personal vDisk is a high-performance enterprise workspace virtualization solution that is built right into Citrix XenDesktop and provides the user customization and personalization benefits of a persistent desktop image, with the storage savings and performance of a single/shared image.

With Citrix Personal vDisk, each user receives personal storage in the form of a layered vDisk, which enables them to personalize and persist their desktop environment.

Additionally, this vDisk stores any user or departmental apps as well as any data or settings the VDI administrator chooses to store. Personal vDisk provides the following benefits to XenDesktop:

● Persistent personalization of user profiles, settings and data

● Deployment and management of user-installed and entitlement-based applications

● Full compatibility with application delivery solutions such as Microsoft SCCM, App-V and Citrix XenApp

● 100% persistence with pooled VDI storage management

● Near-zero management overhead

[Figure: MCS Virtual Desktop Creation — static and random machine catalogs are built from a private snapshot of the master image; each desktop pairs a read-only clone of the base OS disk with its own Identity Disk and Difference Disk (the Difference Disk is deleted at log off for random desktops), and static desktops with PvD add a Personal vDisk.]

[Figure: XenDesktop VDI Image Layer Management — a common base OS image with corporate-installed apps sits beneath the user workspace; Citrix Profile Management handles user settings and user data, while Citrix Personal vDisk technology persists user-installed apps.]


4.1.3 Citrix Profile Manager

Citrix Profile Management is a component of the XenDesktop suite which is used to manage user profiles and minimize many of the issues associated with traditional Windows roaming profiles in an environment where users may have their user profile open on multiple devices at the same time. The profile management toolset has two components: the Profile Management agent, which is installed on any device where user profiles will be managed by the toolset (here, the virtual desktops), and a Group Policy Administrative Template, which is imported into a group policy assigned to the organizational unit within Active Directory that contains the devices whose user profiles will be managed.

To further optimize profile management, folders within the user profile that can be used to store data will be redirected to the user’s home drive. The folder redirection will be managed via group policy objects within Active Directory. The following folders will be redirected:

● Contacts

● Downloads

● Favorites

● Links

● My Documents

● Searches

● Start Menu

● Windows

● My Music

● My Pictures

● My Videos

● Desktop
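For illustration only: the Folder Redirection GPO accomplishes the redirection by rewriting the per-user "User Shell Folders" registry values, roughly as sketched below for My Documents. The server and share names are hypothetical placeholders; in practice the GPO applies these settings automatically and no manual scripting is required.

```powershell
# Illustration of the effect of a Folder Redirection GPO (do not run manually).
# "Personal" is the registry value name for the My Documents folder.
Set-ItemProperty `
    -Path "HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" `
    -Name "Personal" `
    -Value "\\fileserver\home\%USERNAME%\Documents"
```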

4.1.4 SQL Databases

The Citrix and VMware databases will be hosted by a single dedicated SQL Server 2012 VM in the Management layer. Use caution during database setup to ensure that SQL data, logs, and TempDB are properly separated onto their respective volumes. Create all databases that will be required for:

Citrix XenDesktop

vCenter or SCVMM

Initial placement of all databases into a single SQL instance is fine unless performance becomes an issue, in which case databases should be separated into named instances. Enable auto-growth for each database.

Adhere to the best practices defined by Citrix, Microsoft and VMware to ensure optimal database performance.
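The separation of data and log files onto their dedicated volumes, along with auto-growth, can be sketched in T-SQL run through Invoke-Sqlcmd. Server name, database name, drive letters, and file sizes below are assumptions for illustration, not prescribed values.

```powershell
# Sketch: create the XenDesktop site database with data and log files on
# separate volumes and auto-growth enabled (names and sizes are assumptions).
Invoke-Sqlcmd -ServerInstance "SQL01" -Query @"
CREATE DATABASE CitrixXenDesktopSite
ON PRIMARY ( NAME = XDSite_Data,
             FILENAME = 'D:\SQLDATA\XDSite.mdf',
             SIZE = 1GB, FILEGROWTH = 256MB )
LOG ON     ( NAME = XDSite_Log,
             FILENAME = 'L:\SQLLOGS\XDSite.ldf',
             SIZE = 512MB, FILEGROWTH = 128MB );
"@
```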

4.1.5 DNS

DNS plays a crucial role in the environment, not only as the basis for Active Directory but also as the mechanism used to control access to the various Citrix and Microsoft software components. All hosts, VMs, and consumable software components need to have a presence in DNS, preferably via a dynamic and AD-integrated namespace. Microsoft best practices and organizational requirements are to be adhered to.

During the initial deployment, give consideration to eventual scaling and to access to components that may live on one or more servers (SQL databases, Citrix services). Use CNAMEs and the round-robin DNS mechanism to provide a front-end “mask” for the back-end server actually hosting the service or data source.
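The CNAME and round-robin techniques can be sketched with the Windows Server DNS cmdlets. Zone, host names, and addresses below are illustrative assumptions.

```powershell
# Round-robin: two A records under one name; the DNS server rotates answers
Add-DnsServerResourceRecordA -ZoneName "corp.local" -Name "storefront" `
    -IPv4Address 10.0.10.21
Add-DnsServerResourceRecordA -ZoneName "corp.local" -Name "storefront" `
    -IPv4Address 10.0.10.22

# CNAME "mask": a stable alias in front of the server hosting the SQL service
Add-DnsServerResourceRecordCName -ZoneName "corp.local" -Name "xd-sql" `
    -HostNameAlias "sql01.corp.local"
```

If the SQL service later moves to another host, only the CNAME target changes; consumers keep using the alias.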

4.1.6 NetScaler Load Balancing

Depending on which management components are to be made highly available, the use of a load balancer may be required. The following management components require the use of a load balancer to function in a high availability mode:

― StoreFront Servers

― Licensing Server

― XenDesktop XML Service

― XenDesktop Desktop Director

Dell recommends the Citrix NetScaler VPX for load balancing the Dell Wyse Datacenter for Citrix solution.

4.2 VDI Hypervisor Platforms

4.2.1 VMware vSphere 5

VMware vSphere 5 is a virtualization platform used for building VDI and cloud infrastructures. vSphere 5 represents a migration from the ESX architecture to the ESXi architecture.

VMware vSphere 5 includes three major layers: Virtualization, Management and Interface. The Virtualization layer includes infrastructure and application services. The Management layer is central for configuring, provisioning and managing virtualized environments. The Interface layer includes the vSphere client and the vSphere web client.

Throughout the Dell Wyse Datacenter solution, all VMware and Microsoft best practices and prerequisites for core services are adhered to (NTP, DNS, Active Directory, etc.). The vCenter 5 VM used in the solution is a single Windows Server 2008 R2 VM or vCenter 5 virtual appliance, residing on a host in the management tier. SQL Server is a core component of the Windows version of vCenter and is hosted on another VM also residing in the management tier. It is recommended that all additional XenDesktop components be installed in a distributed architecture, one role per server VM.

[Figure: NetScaler Load Balancing Options — a NetScaler HA pair load balances the StoreFront servers, Delivery Controllers, and Provisioning Servers of the XenDesktop farm, which serves the virtual desktop pool backed by Tier 1 and Tier 2 storage arrays and NAS.]

4.2.2 Microsoft Windows Server 2012 R2 Hyper-V

Windows Server 2012 R2 Hyper-V™ is a powerful virtualization technology that enables businesses to leverage the benefits of virtualization. Hyper-V reduces costs, increases hardware utilization, optimizes business infrastructure, and improves server availability. Hyper-V works with virtualization-aware hardware to tightly control the resources available to each virtual machine. The latest generation of Dell servers includes virtualization-aware processors and network adapters.

From a network management standpoint, virtual machines are much easier to manage than physical computers. To this end, Hyper-V includes many management features designed to make managing virtual machines simple and familiar, while enabling easy access to powerful VM-specific management functions. The primary management platform within a Hyper-V based XenDesktop virtualization environment is Microsoft System Center Virtual Machine Manager SP1 (SCVMM).

SCVMM provides centralized and powerful management, monitoring, and self-service provisioning for virtual machines. SCVMM host groups are a way to apply policies and to check for problems across several VMs at once. Groups can be organized by owner, operating system, or by custom names such as “Development” or “Production”. The interface also incorporates Remote Desktop Protocol (RDP): double-click a VM to bring up its console, live and accessible from the management console.

4.3 Citrix NetScaler

Citrix NetScaler is an all-in-one web application delivery controller that makes applications run five times better, reduces web application ownership costs, optimizes the user experience, and makes sure that applications are always available by using:

● Proven application acceleration such as HTTP compression and caching

● High application availability through an advanced L4-7 load balancer

● Application security with an integrated AppFirewall

● Server offloading to significantly reduce costs and consolidate servers

Where Does a Citrix NetScaler Fit in the Network?

A NetScaler appliance resides between the clients and the servers, so that client requests and server responses pass through it. In a typical installation, virtual servers (vservers) configured on the NetScaler provide connection points that clients use to access the applications behind the NetScaler. In this case, the NetScaler owns public IP addresses that are associated with its vservers, while the real servers are isolated in a private network. It is also possible to operate the NetScaler in a transparent mode as an L2 bridge or L3 router, or even to combine aspects of these and other modes. NetScaler can also be used to host the StoreFront function eliminating complexity from the environment.


Global Server Load Balancing

GSLB is an industry standard function in widespread use to provide automatic distribution of user requests to an instance of an application hosted in the appropriate data center where multiple processing facilities exist. The intent is to seamlessly redistribute load on an as-required basis, transparent to the user community. This distribution can be used on a localized or worldwide basis. Many companies use GSLB in its simplest form: the technology automatically redirects traffic to Disaster Recovery (DR) sites on an exception basis. That is, GSLB is configured to route user load to the DR site on a temporary basis only in the event of a catastrophic failure or during extended planned data center maintenance. GSLB is also used to distribute load across data centers on a continuous load balancing basis as part of normal processing.

XenDesktop HA with Netscaler White Paper: Link

Several of the management components of the XenDesktop stack are made highly-available using NetScaler to load balance traffic. The following management components require the use of a load balancer to function in a high availability mode:

― StoreFront Servers

― Licensing Server

― XenDesktop XML Service

― XenDesktop Desktop Director

― Provisioning Services TFTP Service


Citrix NetScaler can be added to the Dell Wyse Datacenter management stack at any time and runs on the existing server infrastructure.

4.4 Citrix CloudBridge

Citrix CloudBridge provides a unified platform that connects and accelerates applications, and optimizes bandwidth utilization across public cloud and private networks. The only WAN optimization solution with integrated, secure, transparent cloud connectivity, CloudBridge allows enterprises to augment their data center with the infinite capacity and elastic efficiency provided by public cloud providers. CloudBridge delivers superior application performance and end-user experiences through a broad base of features, including:

● Market-leading enhancements for the Citrix XenDesktop user experience, including HDX WAN optimization

● Secure, optimized networking between clouds

● Compression, de-duplication and protocol acceleration

● Acceleration of traditional enterprise applications

● Sophisticated traffic management controls and reporting

● Faster storage replication times and reduced bandwidth demands

● Integrated video delivery optimization to support increasing video delivery to branch offices

● A faster experience for all users

CloudBridge is ICA aware and enables IT organizations to accelerate, control and optimize all services – desktops, applications, multimedia and more – for corporate office, branch office and mobile users while dramatically reducing costs. With CloudBridge, branch office users experience a better desktop experience with faster printing, file downloads, video streaming and application start-up times.


For more information please visit: Link


5 Solution Architecture for XenDesktop 7

5.1 Hyper-V

5.1.1 Storage Architecture Overview

The VRTX chassis contains up to 25 available 2.5” SAS disks to be shared with each server blade in the cluster. Each blade will require its own 300GB disk pair for the Windows Server 2012 R2 OS.

Solution Model   Features             Tier 1 Storage (VDI disks)   Tier 2 Storage (mgmt. + user data)
2 Blade          Up to 300 desktops   10 x 300GB 2.5” 15K SAS      5 x 900GB 2.5” 10K SAS
4 Blade          Up to 600 desktops   20 x 300GB 2.5” 15K SAS      5 x 900GB 2.5” 10K SAS

VRTX solution volume configuration:

Volumes         Size (GB)   RAID     Storage Array   Purpose                              File System
VDI             1024        10       Tier 1          VDI Desktops                         CSVFS
Management      200         5 or 6   Tier 2          XD VMs, Hypervisor Mgmt              CSVFS
User Data       2048        5 or 6   Tier 2          File Server                          CSVFS
User Profiles   20          5 or 6   Tier 2          User profiles                        PTD
SQL DATA        100         5 or 6   SQL             SQL                                  CSVFS
SQL LOGS        100         5 or 6   SQL             SQL                                  CSVFS
TempDB Data     5           5 or 6   SQL             SQL                                  CSVFS
TempDB Logs     5           5 or 6   SQL             SQL                                  CSVFS
Quorum          0.5         5 or 6   Tier 2          Cluster                              CSVFS
Templates/ISO   200         5 or 6   Tier 2          ISO/gold image storage (optional)    CSVFS

5.1.2 Compute + Management Infrastructure

The DVS Enterprise ROBO solution using VRTX consists of 2 or 4 nodes, clustered, sharing both management and VDI session hosting responsibilities across all available nodes. The virtual desktop configuration based on workload type is summarized below and should be adjusted as appropriate:

User Type      vCPU   Startup RAM (GB)   Dynamic Memory Min | Max   Buffer   Weight   NIC   OS + Data vDisk (GB)
Standard       1      1                  1GB | 2GB                  20%      Med      1     40
Enhanced       2      1.5                1GB | 3GB                  20%      Med      1     40
Professional   2      2                  2GB | 4GB                  20%      Med      1     40
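The Standard-user row above maps directly onto the Hyper-V dynamic memory cmdlet. The VM name is illustrative, and the "Med" weight is assumed to correspond to the default memory priority of 50.

```powershell
# Standard user profile: 1GB startup, 1-2GB dynamic range, 20% buffer.
# The VM must be powered off to change memory settings.
Set-VMMemory -VMName "Desktop-001" `
    -DynamicMemoryEnabled $true `
    -StartupBytes 1GB `
    -MinimumBytes 1GB `
    -MaximumBytes 2GB `
    -Buffer 20 `
    -Priority 50
```

The Enhanced and Professional profiles differ only in the startup, minimum, and maximum values per the table.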

The Management role requirements for the base solution are summarized below. Use data disks for role-specific application files and data, such as logs and IIS web files, in the Management volume.

Two management role VMs at a minimum are required to support a XenDesktop environment on VRTX and should be spread amongst the available nodes. The values represented below are suggested based on our testing and validation but can be further consolidated if desired. These can be adjusted as appropriate:

Role                     vCPU   Startup RAM (GB)   Dynamic Memory Min | Max   Buffer   Weight   NIC   OS + Data vDisk (GB)   Tier 2 Volume (GB)
DDC + StoreFront + Lic   2      2                  1GB | 8GB                  20%      Med      1     40 + 10                -
SQL                      4      2                  1GB | 8GB                  20%      Med      1     40 + 10                150
SCVMM                    2      2                  1GB | 8GB                  20%      Med      1     40 + 10                200
File                     1      2                  1GB | 2GB                  20%      Med      1     40 + 10                2048

5.1.2.1 Management High Availability

XenDesktop and SCVMM configurations are stored in SQL which can be further protected via an optional SQL mirror or AlwaysOn Failover Cluster Instance.

The following will protect each of the critical infrastructure components in the solution:

● All blade servers will be configured in a Hyper-V cluster (Node and Disk Majority).

● All storage volumes will be configured as Cluster Shared Volumes (CSV) so all hosts in the cluster can read and write to the same volumes.

● SQL Server mirroring is configured with a witness to further protect SQL.


Please refer to this Citrix document for more information: LINK

The following article details the step-by-step mirror configuration: LINK

Additional resources can be found in TechNet: LINK1 and LINK2
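The cluster and CSV protections described above can be sketched with the Failover Clustering cmdlets. Cluster, node, address, and disk names below are illustrative assumptions; the quorum disk remains a regular cluster disk and is not converted to a CSV.

```powershell
# Form the Hyper-V failover cluster across the VRTX blades (names assumed)
New-Cluster -Name "VRTX-CL01" -Node "Blade1","Blade2" -StaticAddress 10.0.10.50

# Promote the shared VRTX data volumes to Cluster Shared Volumes so every
# node can read and write them concurrently (disk names are assumptions)
Add-ClusterSharedVolume -Name "Cluster Disk 2"   # e.g. the VDI volume
Add-ClusterSharedVolume -Name "Cluster Disk 3"   # e.g. the Management volume
```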

5.1.3 Virtual Networking

The Hyper-V configuration utilizes native Windows Server 2012 NIC Teaming to load balance and provide resiliency for network connections. One consolidated NIC team should be configured for the Hyper-V switch for use by both the desktop VMs and the management OS.

Native Windows Server 2012 NIC Teaming is leveraged in this solution design to maximize the available network connections and throughput. A single NIC team configured for use with a single Hyper-V switch is all that is required to serve the desktop VMs and the various functions of the management OS. The management OS shares the Hyper-V switch through vNICs created via PowerShell. These individual vNICs can then be assigned specific VLANs used to segment the pertinent operations of the management OS, such as failover cluster heartbeating and live migration. To guarantee sufficient bandwidth for a given function or vNIC, the QoS feature should be used as well.

Compute + Management hosts

o Management VLAN: Configured for hypervisor and broker management traffic – L3 routed via core switch

o VDI VLAN: Configured for VDI session traffic – L3 routed via core switch

o Cluster/CSV VLAN: Configured for Failover Cluster and Cluster Shared Volume communications – L2 switched only

o Live Migration VLAN: Configured for Live Migration traffic – L2 switched only, trunked from core

o An optional iDRAC VLAN should be configured for the VRTX iDRAC traffic – L3 routed via core switch
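The team, converged Hyper-V switch, management-OS vNICs, and QoS weights can be sketched with the built-in networking cmdlets. Adapter names, VLAN IDs, and the bandwidth weight are illustrative assumptions to be replaced with site-specific values.

```powershell
# One LBFO team across both LAN-facing ports (Dynamic requires 2012 R2)
New-NetLbfoTeam -Name "LANTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Hyper-V switch on the team, with minimum-bandwidth QoS in weight mode
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "LANTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Management-OS vNICs, one per function, each tagged to its VLAN
Add-VMNetworkAdapter -ManagementOS -Name "MGMT"          -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster-CSV"   -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"

Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "MGMT" -Access -VlanId 10
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Cluster-CSV" -Access -VlanId 30
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 40

# Guarantee a minimum bandwidth share for live migration traffic
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
```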


Per host Hyper-V switch/ teaming configuration:

5.1.3.1 Virtual Networking HA

As discussed in section 3.2.1 above, additional bandwidth and redundancy can be achieved by adding NICs to both the VRTX chassis as well as the B fabric of each M620 blade. These additional NICs in each blade can then be added to the NIC team, providing 4 total interfaces for failover and traffic load balancing across 2 physical external sets of interfaces.

[Figure: Per-host Hyper-V switch/teaming configuration — pNIC 1 and pNIC 2 on the 1Gb integrated switch form a 2Gb LAN NIC team feeding a single Hyper-V switch; management OS vNICs (MGMT, Cluster/CSV, Live Migration) and VM vNICs (XD roles, SQL, desktop VMs) share the switch, with VLAN 20 trunked to the ToR/core.]

[Figure: Per-host Hyper-V switch/teaming configuration with HA — adding 1Gb PCIe NICs (pNIC 3 and pNIC 4) to the team yields a 4Gb aggregate with four interfaces for failover and load balancing.]


5.2 vSphere

5.2.1 Compute + Management Server Infrastructure

The Dell Wyse Datacenter ROBO solution using VRTX consists of 2 or 4 nodes, clustered, sharing both management and VDI session hosting responsibilities across all available nodes. Please note that the PCIe mezzanine cards for the B and C fabrics are included in the M620 for VRTX. These cards must not be replaced with Ethernet or FC variants; only the prepopulated PCIe mezzanine cards are supported. The M620 blade server for VRTX is configured with the following specifications for vSphere and Hyper-V (coming soon), respectively.

Shared Tier 1 Compute + Mgmt Host – PowerEdge M620

vSphere                                          Hyper-V
2 x Intel Xeon E5-2690v2 Processor (3GHz)        2 x Intel Xeon E5-2690v2 Processor (3GHz)
256GB Memory (16 x 16GB DIMMs @ 1600MHz)         256GB Memory (16 x 16GB DIMMs @ 1600MHz)
VMware vSphere on 2 x 1GB internal SD            Microsoft Hyper-V on 2 x 300GB 15K SAS disks
Broadcom 57810-k 1Gb/10Gb DP KR NDC              Broadcom 57810-k 1Gb/10Gb DP KR NDC
PCIe mezz cards for fabric B and C (included)    PCIe mezz cards for fabric B and C (included)
iDRAC7 Enterprise w/ vFlash, 8GB SD              iDRAC7 Enterprise w/ vFlash, 8GB SD

The Management role requirements for the base solution are summarized below. Use data disks for role-specific application files and data, such as logs and IIS web files, in the Management volume.

Two management role VMs at a minimum are required to support a XenDesktop environment on VRTX and should be spread amongst the available nodes. The values represented below are suggested based on our testing and validation. These should be adjusted as appropriate:

Role                            vCPU   vRAM (GB)   vRAM Reservation (GB)   NIC   OS + Data vDisk (GB)   Tier 2 Volume (GB)
DDC + StoreFront + Lic + File   2      8           4                       1     40 + 10                2048
SQL + vCenter                   2      8           4                       1     40 + 10                50

The virtual desktop configuration based on workload type is summarized below and should be adjusted as appropriate:

User Type      vCPU   vRAM (GB)   vRAM Reservation (GB)   NIC   OS + Data vDisk (GB)
Standard       1      2           1                       1     40
Enhanced       2      3           1.5                     1     40
Professional   2      4           2                       1     40
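The Standard-user sizing above, including the vRAM reservation, can be sketched with VMware PowerCLI. The vCenter address and VM name are illustrative assumptions, and CPU/memory changes require the VM to be powered off.

```powershell
# PowerCLI sketch: apply the Standard-user sizing to a desktop VM
Connect-VIServer -Server "vcenter.corp.local"

# 1 vCPU, 2GB vRAM per the table (VM must be powered off)
Set-VM -VM "Desktop-001" -NumCpu 1 -MemoryGB 2 -Confirm:$false

# Reserve half of the configured vRAM (1GB), per the table above
Get-VM "Desktop-001" | Get-VMResourceConfiguration |
    Set-VMResourceConfiguration -MemReservationMB 1024
```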


5.2.1.1 Management High Availability

XenDesktop and vCenter configurations are stored in SQL which can be further protected via an optional SQL mirror or AlwaysOn Failover Cluster Instance.

The following will protect each of the critical infrastructure components in the solution:

● All blade servers will be configured in a vSphere HA cluster.

● All storage volumes will be configured as shared datastores so all hosts in the cluster can read and write to the same volumes.

● SQL Server mirroring is configured with a witness to further protect SQL.

Please refer to this Citrix document for more information: LINK

The following article details the step-by-step mirror configuration: LINK

Additional resources can be found in TechNet: LINK1 and LINK2

5.2.2 Storage Architecture Overview

The VRTX chassis contains up to 25 available 2.5” SAS disks to be shared with each server blade in the cluster.

Solution Model   Features             Tier 1 Storage (VDI disks)   Tier 2 Storage (mgmt. + user data)
2 Blade          Up to 250 desktops   10 x 300GB 2.5” 15K SAS      5 x 900GB 2.5” 10K SAS
4 Blade          Up to 500 desktops   20 x 300GB 2.5” 15K SAS      5 x 900GB 2.5” 10K SAS


VRTX solution volume configuration:

Volumes         Size (GB)   RAID     Disk Pool   Purpose                              File System
VDI             1024        10       Tier 1      VDI Desktops                         VMFS
Management      200         5 or 6   Tier 2      XD VMs, Hypervisor Mgmt              VMFS
User Data       2048        5 or 6   Tier 2      File Server                          RDM
User Profiles   20          5 or 6   Tier 2      User profiles                        VMFS
SQL DATA        100         5 or 6   Tier 2      SQL                                  VMFS
SQL LOGS        100         5 or 6   Tier 2      SQL                                  VMFS
TempDB Data     5           5 or 6   Tier 2      SQL                                  VMFS
TempDB Logs     5           5 or 6   Tier 2      SQL                                  VMFS
Templates/ISO   200         5 or 6   Tier 2      ISO/gold image storage (optional)    VMFS

5.2.3 Virtual Networking

The vSphere configuration utilizes the built-in vSwitch capabilities of ESXi to load balance and provide resiliency for network connections. One consolidated vSwitch should be configured for use by both the desktop VMs and ESXi management. The following outlines the recommended VLANs for use in the solution:

Compute + Management hosts

o Management VLAN: Configured for hypervisor and broker management traffic – L3 routed via core switch

o VDI VLAN: Configured for VDI session traffic – L3 routed via core switch

o Live Migration VLAN: Configured for Live Migration traffic – L2 switched only, trunked from core

o An optional iDRAC VLAN should be configured for the VRTX iDRAC traffic – L3 routed via core switch
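The consolidated vSwitch and its per-VLAN port groups can be sketched with VMware PowerCLI. Host name, vmnic names, port group names, and VLAN IDs are illustrative assumptions to be replaced with site-specific values.

```powershell
# Per-host vSwitch with both pNICs as uplinks (names are assumptions)
$vmhost = Get-VMHost "vrtx-blade1.corp.local"
$vsw = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch0" -Nic "vmnic0","vmnic1"

# One port group per VLAN from the list above
New-VirtualPortGroup -VirtualSwitch $vsw -Name "MGMT"    -VLanId 10
New-VirtualPortGroup -VirtualSwitch $vsw -Name "VDI"     -VLanId 20
New-VirtualPortGroup -VirtualSwitch $vsw -Name "vMotion" -VLanId 40
```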

Per host ESXi vSwitch configuration:

[Figure: Per-host ESXi vSwitch configuration — pNIC 1 and pNIC 2 on the 1Gb integrated switch uplink a single vSphere vSwitch carrying MGMT, vMotion, and VDI port groups for the XD role, SQL, and desktop VMs, with VLANs 10 and 20 trunked to the ToR/core.]


5.2.3.1 Virtual Networking HA

As discussed in section 3.2.1 above, additional bandwidth and redundancy can be achieved by adding NICs to both the VRTX chassis as well as the B fabric of each M620 blade. These additional NICs in each blade can then be added to the NIC team, providing 4 total interfaces for failover and traffic load balancing across 2 physical external sets of interfaces.

[Diagram: per-host ESXi vSwitch configuration with HA – VDI (VLAN 10), MGMT, and vMotion (VLAN 20) port groups on one vSphere vSwitch, with pNIC 1 and pNIC 2 on the 1Gb integrated switch plus pNIC 3 and pNIC 4 on 1Gb PCIe NICs, all uplinked to the ToR/core.]


5.3 XenDesktop on VRTX Communication Flow

[Diagram: XenDesktop on VRTX communication flow – clients connect over the Internet/LAN through Citrix NetScaler (SSL) and StoreFront to the VDAs on static or random virtual desktops (ICA/HDX), hosted on Hyper-V or vSphere Mgmt + Compute hosts. The Delivery Controller provisions the MCS machine catalog from the Windows master image and communicates with StoreFront (XML), the License Server (TCP/27000), SQL Server (TCP/1433), vCenter/SCVMM (HTTPS/TCP 8100), and Active Directory (LDAP); user data is served from the File Server (SMB) on internal T1 + T2 storage.]


6 Solution Performance and Testing

For a full explanation of the Dell Wyse Solution Engineering Performance Analysis and Characterization process, please refer to section 7 of the core solution Reference Architecture for Dell Wyse Datacenter for Citrix XenDesktop: Link

6.1 Load Generation and Monitoring

6.1.1 Login VSI – Login Consultants

Login VSI is the de-facto industry standard tool for testing VDI environments and server-based computing / terminal services environments. It installs a standard collection of desktop application software (e.g. Microsoft Office, Adobe Acrobat Reader etc.) on each VDI desktop; it then uses launcher systems to connect a specified number of users to available desktops within the environment. Once the user is connected, the logon script configures the user environment and then starts the test workload. Each launcher system can launch connections to a number of 'target' machines (i.e. VDI desktops). The launchers are managed by a centralized management console, which is used to configure and manage the Login VSI environment.

6.1.2 Liquidware Labs Stratusphere UX

Stratusphere UX was used during each test run to gather data relating to User Experience and desktop performance. Data was gathered at the Host and Virtual Machine layers and reported back to a central server (the Stratusphere Hub). The hub was then used to create a series of comma-separated value (.csv) reports, which were used to generate graphs and summary tables of key information. In addition, the Stratusphere Hub generates a magic-quadrant-style scatter plot showing the Machine and IO experience of the sessions. The Stratusphere Hub was deployed on the core network, so its monitoring did not impact the servers being tested. This core network represents an existing customer environment and also includes the following services:

● Active Directory

● DNS

● DHCP

● Anti-Virus

Stratusphere UX calculates the User Experience by monitoring key metrics within the Virtual Desktop environment; the metrics and their thresholds are shown in the following screen shot:


6.1.3 VMware vCenter

VMware vCenter has been used for VMware vSphere-based solutions to gather key data (CPU, Memory and Network usage) from each of the desktop hosts during each test run. This data was exported to .csv files for each host and then consolidated to show data from all hosts. While the report does not include specific performance metrics for the Management host servers, these servers were monitored during testing and were seen to be performing at an expected performance level.

6.1.4 Microsoft Perfmon

Microsoft Perfmon was utilized to collect performance data for tests performed on the Hyper-V platform.

6.2 Testing and Validation

6.2.1 Testing Process

The purpose of the single-server testing is to validate the architectural assumptions made around the server stack. Each user load is tested across four runs: a pilot run to validate that the infrastructure is functioning and valid data can be captured, and three subsequent runs allowing correlation of data. The test results are summarized in tabular format below.

At different stages of the testing the testing team will complete some manual “User Experience” Testing while the environment is under load. This will involve a team member logging into a session during the run and completing tasks similar to the User Workload description. While this experience will be subjective, it will help provide a better understanding of the end user experience of the desktop sessions, particularly under high load, and ensure that the data gathered is reliable.

Login VSI has two modes for launching user sessions:

● Parallel

― Sessions are launched from multiple launcher hosts in a round-robin fashion; this mode is recommended by Login Consultants when running tests against multiple host servers. In parallel mode the VSI console is configured to launch a number of sessions over a specified time period (in seconds).

● Sequential

― Sessions are launched from each launcher host in sequence; sessions are only started from a second host once all sessions have been launched on the first host. This is repeated for each launcher host. Sequential launching is recommended by Login Consultants when testing a single desktop host server. The VSI console is configured to launch a specific number of sessions at a specified interval (in seconds).
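The difference between the two launch modes amounts to the order in which sessions are assigned to launcher hosts, which can be sketched as follows (illustrative function names, not Login VSI code):

```python
from itertools import cycle, islice

def parallel_order(launchers: list, total_sessions: int) -> list:
    """Round-robin across launcher hosts (Login VSI 'Parallel' mode)."""
    return list(islice(cycle(launchers), total_sessions))

def sequential_order(launchers: list, sessions_per_launcher: int) -> list:
    """Exhaust one launcher host before moving to the next ('Sequential' mode)."""
    return [host for host in launchers for _ in range(sessions_per_launcher)]

# Parallel spreads load evenly from the start; sequential concentrates it.
parallel_order(["L1", "L2"], 4)      # L1, L2, L1, L2
sequential_order(["L1", "L2"], 2)    # L1, L1, L2, L2
```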

All test runs involving the 6 desktop hosts were conducted using the Login VSI “Parallel Launch” mode, with all sessions launched over an hour to represent the typical 9am logon storm. Once the last user session has connected, the sessions are left to run for 15 minutes before being instructed to log out at the end of the current task sequence; this allows every user to complete a minimum of two task sequences within the run before logging out. The single-server test runs were configured to launch user sessions every 60 seconds; as with the full-bundle test runs, sessions were left to run for 15 minutes after the last user connected before being instructed to log out.


6.2.2 Dell Wyse Datacenter Workloads and Profiles

It’s important to understand user workloads and profiles when designing a desktop virtualization solution in order to understand the density numbers that the solution can support. At Dell, we use five workload/profile levels, each of which is bound by specific metrics and capabilities.

6.2.2.1 Dell Wyse Datacenter Profiles

The table shown below presents the profiles used during PAAC on Dell Wyse Datacenter solutions. These profiles have been carefully selected to provide the optimal level of resources for common use cases.

| Profile Name | Number of vCPUs per Virtual Desktop | Nominal RAM (GB) per Virtual Desktop | Use Case |
|---|---|---|---|
| Standard | 1 | 2 | Task Worker |
| Enhanced | 2 | 3 | Knowledge Worker |
| Professional | 2 | 4 | Power User |
| Shared Graphics | 2 + Shared GPU | 3 | Knowledge Worker with high graphics consumption requirements |
| Pass-through Graphics | 4 + Pass-through GPU | 32 | Workstation-type user, e.g. producing complex 3D models |

6.2.2.2 Dell Wyse Datacenter Workloads

Load-testing on each of the profiles described in the above table is carried out using an appropriate workload that is representative of the relevant use case. In the case of the non-graphics workloads, these workloads are Login VSI workloads and in the case of graphics workloads, these are specially designed workloads that stress the VDI environment to a level that is appropriate for the relevant use case. This information is summarized in the table below.

| Profile Name | Workload | OS Image |
|---|---|---|
| Standard | Login VSI Light | Shared |
| Enhanced | Login VSI Medium | Shared |
| Professional | Login VSI Heavy | Shared + Profile Virtualization |
| Shared Graphics | Fishbowl / eFigures | Shared + Profile Virtualization |
| Pass-through Graphics | eFigures / AutoCAD - SPEC Viewperf | Persistent |

Further information for each of the workloads is given below. Note that for Login VSI testing, the following login and boot paradigm is used:

● For single-server / single-host testing (typically carried out to determine the virtual desktop capacity of a specific physical server), users are logged in every 30 seconds.


● For multi-host / full-solution testing, users are logged in over a period of 1 hour, to replicate the normal login storm in an enterprise environment.

● All desktops are pre-booted in advance of logins commencing.

For all testing, all virtual desktops run an industry-standard anti-virus solution (currently McAfee VirusScan Enterprise) in order to fully replicate a customer environment.

6.3 XenDesktop on VRTX Test Results

6.3.1 vSphere Summary

This validation was performed for XenDesktop 7 delivering VDI desktops on 2 M620 servers in a VRTX Chassis, running VMware ESXi 5.1 Update 1 with 256 GB RAM (1600 MHz) and dual 3.0 GHz Ivy Bridge E5-2690v2 processors. Validation was performed using Dell Wyse Solution Engineering standard testing methodology using LoginVSI load generation tool for VDI benchmarking that simulates production user workloads. The management layer of the VDI stack was configured on the same hosts as the VDI desktops in order to minimize footprint and provide a self-contained solution, with the exception of basic networking services such as AD, DNS, and DHCP. To minimize the resources devoted to management, the Management roles were combined in the following configuration:

● 1 VM, 2 vCPU and 4 GB RAM for XenDesktop 7 Controller, StoreFront, and Licensing services

● 1 VM, 2 vCPU and 8 GB RAM for VMware vCenter 5.1 Update 1 and Microsoft SQL Server 2012 SP1

XenDesktop 7 was configured to use MCS for virtual machine provisioning.

The VRTX chassis also provided the storage for the solution in its Shared PERC 8 enclosure with Tier 1 consisting of 20x 300GB 15K RPM SAS drives in RAID 10 and Tier 2 consisting of 5x 1.0 TB 7.2k RPM SAS drives in RAID 6.

XenDesktop 7 Delivery Groups have configuration settings that control how many virtual desktops are pre-started in the group before client connections arrive. By default, the Delivery Group uses a 10% buffer in pre-starting desktops, so that 10% of the total pool should be online and ready at any time. Administrators may change this percentage to suit their demand. In the case of this test, the prestart buffer was left at the default 10%, so that virtual desktops would be powering on while the test was running.
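The default buffer behavior described above amounts to a simple calculation (an illustrative sketch, not Citrix code):

```python
import math

def prestarted_desktops(pool_size: int, buffer_pct: float = 10.0) -> int:
    """Desktops kept powered on ahead of client connections (default 10% buffer)."""
    return math.ceil(pool_size * buffer_pct / 100)

# With the default buffer, a 300-desktop pool keeps 30 desktops pre-started.
prestarted_desktops(300)
```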

The test sequence follows three stages:

● Ramp-up phase – user sessions log in gradually at 30-second intervals until the maximum number of sessions has been reached.

● Steady-state phase – all sessions are logged in and running the planned workload continuously.

● Ramp-down phase – sessions are permitted to log off when their planned activities are completed.

For the purpose of data collection, the steady-state and ramp-down phases are set to predefined intervals of 70 minutes and 30 minutes respectively. Data collection begins with the ramp-up phase and continues until the predefined interval for ramp-down has elapsed.


In the following table, the Tier 1 IOPS data is measured during steady-state in order to better approximate real-world usage.

| Tier 1 Read/Write Average Ratio | Tier 1 Read/Write Max Ratio | Tier 1 IOPS Average | Tier 1 IOPS Max |
|---|---|---|---|
| 20/80 | 88/12 | 6.4 | 8.1 |

| User Workload | Max Users Per Physical Server |
|---|---|
| Standard User | 125 |
| Enhanced User | 100 |
| Professional User | 75 |
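Reading the Tier 1 IOPS columns as per-user steady-state figures (an assumption; the table does not label the unit), the aggregate load on the Tier 1 array can be estimated as follows (function names are illustrative):

```python
def total_iops(per_user_iops: float, users: int) -> float:
    """Scale the per-user steady-state figure to the whole pool."""
    return per_user_iops * users

def split_read_write(total: float, read_pct: float) -> tuple:
    """Split total IOPS using the observed read/write ratio (e.g. 20/80)."""
    reads = total * read_pct / 100
    return reads, total - reads

avg_pool_iops = total_iops(6.4, 300)                 # ~1920 IOPS at steady state
reads, writes = split_read_write(avg_pool_iops, 20)  # ~384 reads, ~1536 writes
```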

6.3.2 vSphere 5.1 Update 1 Test Results

VMware ESXi 5.1 running on Ivy Bridge processors provides a good user experience at a maximum of 150 users per blade in the VRTX system (for reference). CPU usage per blade remains below 80% during steady operations, and memory usage is within limits, especially considering that the management VMs are also included. The IOPS on the Tier 1 storage were well within the limitations of the PERC adapter.

In this test the ramp-up phase was 150 minutes (300 sessions * 30 sec = 9000 sec = 150 min). The steady-state phase was defined as 70 minutes, and the ramp-down phase was defined as 30 minutes. This yields a total test duration of 250 minutes or 4 hours and 10 min.
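The phase arithmetic above can be reproduced directly:

```python
def ramp_up_minutes(sessions: int, interval_sec: int) -> float:
    """Login-storm duration when one session is launched every interval."""
    return sessions * interval_sec / 60

STEADY_STATE_MIN = 70
RAMP_DOWN_MIN = 30

# 300 sessions at 30-second intervals: 150 min ramp-up, 250 min total test.
total_min = ramp_up_minutes(300, 30) + STEADY_STATE_MIN + RAMP_DOWN_MIN
```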

Phase durations (test started at 13:40):

Ramp-up: 13:40 to 16:20

Steady-state: 16:20 to 17:30

Ramp-down: 17:30 to 18:00


6.3.2.1 Standard User Workload (300 Users, 2 hosts)

These graphs show CPU, memory, local disk IOPS, network and VDI UX scatter plot results.

[Charts: ESXi – 300 Basic Users. CPU usage (%), Memory (Active/Granted GB), Tier 1 total IOPS, and Network (total KBps) for hosts vrtx-01 and vrtx-02, plotted from 13:41 to 17:53.]


6.3.3 Hyper-V Summary

This validation was performed for XenDesktop 7.1 delivering VDI desktops on 4 x M620 Compute blades running Windows 2012 R2 Hyper-V installed in a VRTX chassis with 25 x 2.5in drive slots for local Shared PERC storage. Each Compute Host was outfitted with 256 GB RAM (1600 MHz), dual 3.0 GHz Ivy Bridge E5-2690v2 processors and 2 Broadcom NICs on a mezzanine card. Both NICs were connected on each Compute Host and were teamed.

The VRTX Shared PERC storage was set up with two RAID arrays:

● Disks 0-19: 20x 300GB 15k RPM SAS drives in RAID 10

● Disks 20-24: 5x 1.2 TB 10k RPM SAS drives in RAID 6

The following virtual disks were created from these RAID arrays:

| Name | Size | RAID | Purpose |
|---|---|---|---|
| Tier1 | 2788 GB | Disks 0-19 | Cluster Shared Volume |
| Quorum | 1 GB | Disks 20-24 | Quorum Witness |
| Tier2PTD | 2048 GB | Disks 20-24 | Pass-through Disk |
| Tier2 | 1302 GB | Disks 20-24 | Cluster Shared Volume |
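As a sanity check, the three Tier 2 virtual disks fit within the usable capacity of the 5-disk RAID 6 set, assuming the drive sizes are decimal TB while the virtual-disk sizes are binary GiB (an assumption, since the document only says "GB"):

```python
def raid6_usable_gib(disk_count: int, disk_tb: float) -> float:
    """RAID 6 keeps (n - 2) disks of data; convert decimal TB to binary GiB."""
    return (disk_count - 2) * disk_tb * 1e12 / 2**30

usable = raid6_usable_gib(5, 1.2)   # ~3352.8 GiB usable from 5 x 1.2 TB
allocated = 1 + 2048 + 1302         # Quorum + Tier2PTD + Tier2 virtual disks
```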

The provisioned VDI desktops and their master images were stored on the Tier1 CSV. The Management VMs were hosted on the same blades as the VDI desktops with the exception that the Management VM VHD storage was on the Tier2 CSV and used the Tier2PTD. The Management VMs were deployed using the following configurations:

| CPU | Memory | Disk | Purpose |
|---|---|---|---|
| 2 | 8 GB | 50 GB VHD | XenDesktop 7.1 DDC, StoreFront and Licensing |
| 2 | 8 GB | 50 GB VHD (OS) + 200 GB VHD (Library) | Microsoft System Center 2012 R2 Virtual Machine Manager |
| 4 | 8 GB | 50 GB VHD (OS) + 150 GB VHD (SQL) | Microsoft SQL Server 2012 SP1 |
| 1 | 1 GB | 40 GB VHD + 2048 GB Pass-through Disk | Windows File Share for User Profile redirection |

XenDesktop 7.1 was configured for MCS provisioning only in this scenario. The Desktop images were built using Windows 8.1 Enterprise and Microsoft Office 2010. For Standard and Enhanced workloads the 32-bit versions were used, but for Professional workloads the 64-bit versions were installed. The desktop VMs were configured using the recommended numbers of vCPU and dynamic RAM settings for Hyper-V for each User Type (Standard, Enhanced, and Professional).


6.3.3.1 Windows Server 2012 R2 Hyper-V Test Results

Validation was performed using the DVS standard testing methodology with the Login VSI 4 load-generation tool for VDI benchmarking, which simulates production user workloads. Stratusphere UX was not used in this validation since it is not compatible with the desktop OS, Windows 8.1 Enterprise.

Each test run was configured to allow for 30 minutes of steady-state operations after the initial session launch and login period, so that all sessions would be logged in and active for a significant period of time and statistics could be measured during steady state to simulate daily operations.

The following table summarizes the steady state test results for the various workloads and configurations. The charts in the sections below show more details about the max values for CPU, Memory, IOPS, and Network during each test run.

| Hypervisor | Provisioning | Workload | Density | Avg CPU % per blade | Avg Mem Usage GB per blade | Avg Tier1 IOPS/User | Avg Net KBps/User |
|---|---|---|---|---|---|---|---|
| Hyper-V | MCS | Standard | 600 | 69% | 210 | 2.6 | 104 |
| Hyper-V | MCS | Enhanced | 440 | 58% | 194 | 6.7 | 142 |
| Hyper-V | MCS | Professional | 340 | 48% | 202 | 7.4 | 116 |

CPU usage clearly was not at its maximum for the available hardware in any test run. The M620s in this test were equipped with the same CPU and memory as our typical R720 compute hosts, which have been tested at densities of 170 Standard users, 110 Enhanced users, and 90 Professional users on one host with local storage. The densities tested in this case were at or below those figures, with only 150 Standard, 110 Enhanced, and 85 Professional users per host, which would predictably generate less CPU load, although that headroom was intended to be consumed by the management VMs, which were not present on the hosts in earlier testing.

The primary reason for the divergence in CPU usage is that these hosts were clustered with CSV storage and a shared PERC 8 controller. The combination of these factors offloads a substantial amount of IO-related CPU cycles to one host, leaving the others much less loaded. This effect is evident in the CPU usage graphs below, where we see that in each test one host had significantly more CPU usage and network KBps than the others, even though the number of VMs per host was carefully managed to be on par before the test started. The management VMs, too, were distributed evenly among the blades, and no single management VM had enough CPU load to account for the difference.

Memory usage was also well below full utilization of the 256 GB available on the hosts, but was consistent with expected usage of 1.2-1.4 GB/user for Standard, 1.6-1.8 GB/user for Enhanced, and 2.2-2.4 GB/user for Professional.
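Dividing the average per-blade memory from the results table by the users per blade reproduces these per-user figures (approximate, since the per-blade averages also include the management VMs):

```python
expected_gb_per_user = {            # ranges quoted in the text
    "Standard":     (1.2, 1.4),
    "Enhanced":     (1.6, 1.8),
    "Professional": (2.2, 2.4),
}
observed_gb_per_user = {            # avg GB per blade / users per blade
    "Standard":     210 / 150,      # 1.40
    "Enhanced":     194 / 110,      # ~1.76
    "Professional": 202 / 85,       # ~2.38
}

def within_expected(profile: str) -> bool:
    """Check the observed per-user memory against the quoted range."""
    low, high = expected_gb_per_user[profile]
    return low <= observed_gb_per_user[profile] <= high
```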

Network utilization was consistent with results from other tests using the Login VSI 4 test framework on MCS-provisioned desktops.

The IOPS results were of particular interest because the Shared PERC 8 is the central point for all 4 hosts' Tier1 and Tier2 storage. The average Tier1 IOPS/user were somewhat lower than in previous test results using MCS on XenDesktop 7.1 on Hyper-V 2012 R2. Also of interest was the disk IO latency on the shared disks. At these densities the read latency was well below 20 ms on all shared volumes during steady state until the logoff period of the test. The Tier 2 latency in the 600 Standard users test did rise above 20 ms at points during the Launch and Login phase. Much of the Tier2 latency appears to have been driven by activity on the file-share management VM that was attached to the Tier2 pass-through disk to host the user profiles. This was especially heavy during the Logoff phase, but in steady state remained at very reasonable levels. The write latency was much lower than the read latency in all cases.

6.3.3.2 Recommended Densities

Although the test results showed considerable additional CPU and memory capacity, other limitations may prohibit higher user densities. In each test run we see in the charts section that one host of the 4 had considerably higher CPU usage, even though the distribution of VMs was equal. This may require that VMs be rebalanced so that the busier host's CPU load is equalized.

Disk latency is a concern and should be kept under 20 ms to avoid performance degradation. In the 600 Standard user test, Tier1 latency approached 20 ms during the launch phase, although it remained below 10 ms during steady state. In the table below, the steady-state numbers help to determine a recommended density level.

| Hypervisor | Provisioning | Workload | Tested Density | Avg CPU % per blade | Avg Mem GB per blade | Tier1 Avg Disk Latency ms | Tier1 Max Disk Latency ms | Tier2 Avg Disk Latency ms | Tier2 Max Disk Latency ms | Recommended Density |
|---|---|---|---|---|---|---|---|---|---|---|
| Hyper-V | MCS | Standard | 600 | 69% | 210 | 6.9 | 8.0 | 4.9 | 12.3 | 600 |
| Hyper-V | MCS | Enhanced | 440 | 58% | 194 | 3.2 | 12.3 | 17.3 | 99.7 | 430 |
| Hyper-V | MCS | Professional | 340 | 48% | 202 | 2.2 | 4.7 | 2.5 | 7.0 | 340 |

6.3.4 Microsoft Windows 2012 R2 Hyper-V Test Charts

6.3.4.1 MCS Standard User Workload (600 users)

These graphs show per-blade CPU, memory, and network results, with total IOPS and latency for Tier1 and Tier2, plus IOPS for the Tier2PTD pass-through disk.


[Charts: per-blade CPU % (vrtx-01 through vrtx-04, with Launch/Login, Steady, and Logoff phases marked), Memory GB per blade, Network KBps per blade, Tier1 total IOPS, Tier1 latency (avg read/write, ms), Tier2 total IOPS, Tier2 latency (avg read/write, ms), and Tier2PTD total IOPS, plotted from 19:08 to 22:26.]


6.3.4.2 MCS Enhanced User Workload (440 users)

These graphs show per-blade CPU, memory, and network results, with total IOPS and latency for Tier1 and Tier2, plus IOPS for the Tier2PTD pass-through disk.

0

20

40

60

80

100

13

:59

14

:04

14

:09

14

:14

14

:19

14

:24

14

:29

14

:34

14

:39

14

:44

14

:49

14

:54

14

:59

15

:04

15

:09

15

:14

15

:19

15

:24

15

:29

15

:34

15

:39

15

:44

15

:49

15

:54

15

:59

16

:04

16

:09

16

:14

16

:19

16

:24

16

:29

CPU % per Blade

CPU% vrtx-01 CPU% vrtx-02 CPU% vrtx-03 CPU% vrtx-04

Launch/Login Steady Logoff

0

50

100

150

200

250

13

:59

14

:04

14

:09

14

:14

14

:19

14

:24

14

:29

14

:34

14

:39

14

:44

14

:49

14

:54

14

:59

15

:04

15

:09

15

:14

15

:19

15

:24

15

:29

15

:34

15

:39

15

:44

15

:49

15

:54

15

:59

16

:04

16

:09

16

:14

16

:19

16

:24

16

:29

Memory GB per Blade

Mem GB vrtx-01 Mem GB vrtx-02 Mem GB vrtx-03 Mem GB vrtx-04

0

5000

10000

15000

20000

25000

30000

35000

13

:59

14

:04

14

:09

14

:14

14

:19

14

:24

14

:29

14

:34

14

:39

14

:44

14

:49

14

:54

14

:59

15

:04

15

:09

15

:14

15

:19

15

:24

15

:29

15

:34

15

:39

15

:44

15

:49

15

:54

15

:59

16

:04

16

:09

16

:14

16

:19

16

:24

16

:29

Network KBps per Blade

Net KBps vrtx-01 Net KBps vrtx-02 Net KBps vrtx-03 Net KBps vrtx-04

Page 49: Dell Wyse Datacenter Reference Architecture for Dell · PDF file1 Dell Wyse Datacenter – Reference Architecture for Dell VRTX and Citrix XenDesktop 1 Introduction 1.1 Purpose of

46 Dell Wyse Datacenter – Reference Architecture for Dell VRTX and Citrix XenDesktop

0

1000

2000

3000

4000

5000

6000

70001

3:5

9

14

:04

14

:09

14

:14

14

:19

14

:24

14

:29

14

:34

14

:39

14

:44

14

:49

14

:54

14

:59

15

:04

15

:09

15

:14

15

:19

15

:24

15

:29

15

:34

15

:39

15

:44

15

:49

15

:54

15

:59

16

:04

16

:09

16

:14

16

:19

16

:24

16

:29

Tier1 Total IOPS

Tier1 Total IOPS

Launch/Login Steady Logoff

0

5

10

15

13

:59

14

:04

14

:09

14

:14

14

:19

14

:24

14

:29

14

:34

14

:39

14

:44

14

:49

14

:54

14

:59

15

:04

15

:09

15

:14

15

:19

15

:24

15

:29

15

:34

15

:39

15

:44

15

:49

15

:54

15

:59

16

:04

16

:09

16

:14

16

:19

16

:24

16

:29

Tier1 Latency (ms)

Tier1 Avg Read Latency Tier1 Avg Write Latency

0

20

40

60

80

100

120

13

:59

14

:04

14

:09

14

:14

14

:19

14

:24

14

:29

14

:34

14

:39

14

:44

14

:49

14

:54

14

:59

15

:04

15

:09

15

:14

15

:19

15

:24

15

:29

15

:34

15

:39

15

:44

15

:49

15

:54

15

:59

16

:04

16

:09

16

:14

16

:19

16

:24

16

:29

Tier2 Total IOPS

Tier2 Total IOPS

Launch/Login Steady Logoff

Page 50: Dell Wyse Datacenter Reference Architecture for Dell · PDF file1 Dell Wyse Datacenter – Reference Architecture for Dell VRTX and Citrix XenDesktop 1 Introduction 1.1 Purpose of

47 Dell Wyse Datacenter – Reference Architecture for Dell VRTX and Citrix XenDesktop

[Chart: Tier2 Latency (ms); series: Tier2 Avg Read Latency, Tier2 Avg Write Latency]

[Chart: Tier2 PTD Total IOPS]


6.3.4.3 MCS Professional User Workload (340 users)

The graphs below show per-blade CPU, memory, and network results, along with total IOPS and latency for Tier1 and Tier2 storage, plus total IOPS for Tier2 PTD (the pass-through disk).
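The graphs that follow are annotated with Launch/Login, Steady, and Logoff phases. A minimal sketch of that phase bucketing, using hypothetical timestamped IOPS samples and assumed phase boundaries (none of these values come from the actual test data):

```python
from statistics import mean

# Hypothetical samples (time "HH:MM", total IOPS), shaped like the Tier1 IOPS chart data.
samples = [
    ("21:53", 1200), ("22:05", 8400), ("22:30", 2100),
    ("23:10", 1900), ("23:50", 5200), ("23:57", 4800),
]

# Assumed phase windows [start, end), mirroring the chart annotations.
# Samples here stay within one day; a midnight wrap (e.g. "0:01") is not handled in this sketch.
phases = {
    "Launch/Login": ("21:53", "22:30"),
    "Steady":       ("22:30", "23:45"),
    "Logoff":       ("23:45", "23:59"),
}

# Zero-padded HH:MM strings compare correctly with plain lexicographic ordering.
phase_avg = {
    name: mean(iops for t, iops in samples if start <= t < end)
    for name, (start, end) in phases.items()
}

for name, avg in phase_avg.items():
    print(f"{name}: avg {avg:.0f} IOPS")
```

The same windowing is how a per-phase summary (average IOPS, peak latency) would be derived from the raw monitoring samples behind each chart.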

[Chart: CPU % per Blade over the test run (21:53 to 0:01); series: CPU% vrtx-01 through vrtx-04; annotated with Launch/Login, Steady, and Logoff phases]

[Chart: Memory GB per Blade; series: Mem GB vrtx-01 through vrtx-04]

[Chart: Network KBps per Blade; series: Net KBps vrtx-01 through vrtx-04]


[Chart: Tier1 Total IOPS over the test run (21:53 to 0:01), annotated with Launch/Login, Steady, and Logoff phases]

[Chart: Tier1 Latency (ms); series: Tier1 Avg Read Latency, Tier1 Avg Write Latency]

[Chart: Tier2 Total IOPS]

[Chart: Tier2 PTD Total IOPS]


About the Authors

Peter Fine is the Sr. Principal Solutions Architect for Citrix-based solutions at Dell. Peter has extensive experience and expertise with the broader Microsoft, Citrix, and VMware software stacks, as well as in enterprise virtualization, storage, networking, and enterprise datacenter design.

Rick Biedler is the Solutions Development Manager for Citrix solutions at Dell, managing the development and delivery of enterprise-class desktop virtualization solutions based on Dell datacenter components and core virtualization platforms.

Geoff Dillon is a Sr. Solutions Engineer in the Dell Wyse Solutions Engineering Group at Dell, with deep Citrix experience and validation expertise with Dell's Dell Wyse Datacenter VDI solutions.