DESIGN GUIDE
EMC VSPEX END-USER COMPUTING Citrix XenDesktop 7.6 and VMware vSphere with EMC XtremIO Enabled by EMC Isilon, EMC VNX, and EMC Data Protection
EMC VSPEX
Abstract
This Design Guide describes how to design an EMC® VSPEX® End-User Computing solution for Citrix XenDesktop 7.6. EMC XtremIO™, EMC Isilon®, EMC VNX®, and VMware vSphere provide the storage and virtualization platforms.
July 2015
2 EMC VSPEX End-User Computing: Citrix XenDesktop 7.6 and VMware vSphere with EMC XtremIO Design Guide
Copyright © 2015 EMC Corporation. All rights reserved. Published in the USA.
Published July 2015
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC², EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.
EMC VSPEX End-User Computing Citrix XenDesktop 7.6 and VMware vSphere with EMC XtremIO Enabled by EMC VNX, EMC Isilon, and EMC Data Protection Design Guide
Part Number H14074.1
Contents
Chapter 1 Introduction 7
Purpose of this guide .................................................................................................. 8
Business value ........................................................................................................... 8
Scope ......................................................................................................................... 9
Audience .................................................................................................................... 9
Terminology.............................................................................................................. 10
Chapter 2 Before You Start 11
Deployment workflow ............................................................................................... 12
Essential reading ...................................................................................................... 12
Chapter 3 Solution Overview 13
Overview .................................................................................................................. 14
VSPEX Proven Infrastructures ................................................................................... 14
Solution architecture ................................................................................................ 15
High-level architecture ......................................................................................... 15
Logical architecture ............................................................................................. 17
Key components ....................................................................................................... 18
Desktop virtualization broker ................................................................................... 19
Overview .............................................................................................................. 19
Citrix .................................................................................................................... 19
XenDesktop 7.6 ................................................................................................... 19
Machine Creation Services ................................................................................... 20
Citrix Provisioning Services .................................................................................. 21
Citrix Personal vDisk ............................................................................................ 21
Citrix Profile Management .................................................................................... 21
Virtualization layer ................................................................................................... 21
VMware vSphere .................................................................................................. 21
VMware vCenter Server ........................................................................................ 22
VMware vSphere High Availability ........................................................................ 22
VMware vShield Endpoint .................................................................................... 22
Compute layer .......................................................................................................... 22
Network layer ........................................................................................................... 22
Storage layer ............................................................................................................ 23
EMC XtremIO ........................................................................................................ 23
EMC Isilon............................................................................................................ 25
EMC VNX .............................................................................................................. 28
Virtualization management .................................................................................. 31
Data protection layer ................................................................................................ 33
Security layer ............................................................................................................ 33
Citrix ShareFile StorageZones solution ..................................................................... 33
Chapter 4 Sizing the Solution 35
Overview .................................................................................................................. 36
Reference workload .................................................................................................. 36
Login VSI ............................................................................................................. 37
VSPEX Private Cloud requirements............................................................................ 37
Private cloud storage layout ................................................................................. 38
VSPEX XtremIO array configurations ......................................................................... 38
Validated XtremIO configurations ....................................................................... 38
XtremIO storage layout ........................................................................................ 38
Expanding existing VSPEX end-user computing environments ............................. 39
Isilon configuration .................................................................................................. 39
VNX array configurations .......................................................................................... 40
User data storage VNX building block .................................................................. 40
EMC FAST VP ........................................................................................................ 40
VNX shared file systems....................................................................................... 40
Choosing the appropriate reference architecture ...................................................... 41
Using the Customer Sizing Worksheet .................................................................. 41
Selecting a reference architecture ........................................................................ 43
Fine tuning hardware resources ........................................................................... 44
Summary ............................................................................................................. 45
Chapter 5 Solution Design Considerations and Best Practices 46
Overview .................................................................................................................. 47
Server design considerations ................................................................................... 47
Server best practices ........................................................................................... 48
Validated server hardware ................................................................................... 49
vSphere memory virtualization ............................................................................ 49
Memory configuration guidelines ......................................................................... 51
Network design considerations ................................................................................ 53
Validated network hardware ................................................................................ 53
Network configuration guidelines ........................................................................ 54
Storage design considerations ................................................................................. 58
Overview .............................................................................................................. 58
Validated storage hardware and configuration ..................................................... 58
vSphere storage virtualization ............................................................................. 59
High availability and failover .................................................................................... 59
Virtualization layer ............................................................................................... 59
Compute layer ..................................................................................................... 60
Network layer ....................................................................................................... 60
Storage layer ....................................................................................................... 61
Validation test profile ............................................................................................... 62
Profile characteristics .......................................................................................... 62
EMC Data Protection configuration guidelines .......................................................... 63
Data protection profile characteristics ................................................................. 63
Data protection layout ......................................................................................... 64
Chapter 6 Reference Documentation 65
EMC documentation ................................................................................................. 66
Other documentation ............................................................................................... 66
Appendix A Customer Sizing Worksheet 68
Customer Sizing Worksheet for end-user computing ................................................. 69
Figures
Figure 1. VSPEX Proven Infrastructures .............................................................. 15
Figure 2. Architecture of the validated solution .................................................. 16
Figure 3. Logical architecture for both block and file storage .............................. 17
Figure 4. XenDesktop 7.6 architecture components ........................................... 19
Figure 5. Isilon cluster components ................................................................... 26
Figure 6. EMC Isilon OneFS operating system functionality................................. 26
Figure 7. Isilon node classes .............................................................................. 28
Figure 8. New Unisphere Management Suite ...................................................... 30
Figure 9. Compute layer flexibility ...................................................................... 47
Figure 10. Hypervisor memory consumption ........................................................ 50
Figure 11. Virtual machine memory settings ........................................................ 52
Figure 12. Highly-available XtremIO FC network design example .......................... 55
Figure 13. Highly-available VNX Ethernet network design example ....................... 56
Figure 14. Required networks .............................................................................. 57
Figure 15. VMware virtual disk types .................................................................... 59
Figure 16. High availability at the virtualization layer ........................................... 60
Figure 17. Redundant power supplies .................................................................. 60
Figure 18. VNX Ethernet network layer high availability ........................................ 61
Figure 19. XtremIO series high availability ........................................................... 61
Figure 20. VNX series high availability ................................................................. 62
Figure 21. Printable customer sizing worksheet ................................................... 70
Tables
Table 1. Terminology......................................................................................... 10
Table 2. Deployment workflow .......................................................................... 12
Table 3. Solution components .......................................................................... 18
Table 4. VSPEX end-user computing: Design process ........................................ 36
Table 5. Reference virtual desktop characteristics ............................................ 36
Table 6. Infrastructure server minimum requirements ....................................... 37
Table 7. XtremIO storage layout ........................................................................ 39
Table 8. User data resource requirement on Isilon ............................................ 39
Table 9. User data resource requirement on VNX .............................................. 40
Table 10. Example Customer Sizing Worksheet ................................................... 41
Table 11. Reference virtual desktop resources .................................................... 43
Table 12. Server resource component totals ....................................................... 44
Table 13. Server hardware .................................................................................. 49
Table 14. Minimum switching capacity ............................................................... 53
Table 15. Storage hardware ................................................................................ 58
Table 16. Validated environment profile ............................................................. 62
Table 17. Data protection profile characteristics ................................................. 63
Table 18. Customer sizing worksheet .................................................................. 69
Chapter 1 Introduction
This chapter presents the following topics:
Purpose of this guide ................................................................................................. 8
Business value ........................................................................................................... 8
Scope ......................................................................................................................... 9
Audience .................................................................................................................... 9
Terminology ............................................................................................................. 10
Purpose of this guide
The EMC® VSPEX® End-User Computing Proven Infrastructure provides the customer with a modern system capable of hosting a large number of virtual desktops at a consistent performance level. This VSPEX End-User Computing solution for Citrix XenDesktop 7.6 runs on a VMware vSphere virtualization layer backed by the highly available EMC XtremIO™ family, which provides the storage. In this solution, the desktop virtualization infrastructure components are layered on a VSPEX Private Cloud for VMware vSphere Proven Infrastructure, while the desktops are hosted on dedicated resources.
The compute and network components, which are defined by the VSPEX partners, are designed to be redundant and sufficiently powerful to handle the processing and data needs of a large virtual desktop environment. EMC XtremIO storage systems provide storage for virtual desktops, EMC Isilon® or VNX systems provide storage for user data, EMC Avamar® data protection solutions provide data protection for Citrix XenDesktop data, and RSA SecurID provides optional secure user authentication functionality.
This VSPEX End-User Computing solution is validated for up to 3,500 virtual desktops. The validated configurations are based on a reference desktop workload and form the basis for creating cost-effective, custom solutions for individual customers.
XtremIO supports scale-out clusters of up to six X-Bricks, and each additional X-Brick increases performance and virtual desktop capacity linearly. XtremIO X-Bricks have been validated to support higher desktop counts; the VSPEX-validated numbers apply only to the solution described in this guide.
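As a rough illustration of this linear scale-out, the sketch below estimates cluster capacity from the single-X-Brick figure. Only the X-Brick and Starter X-Brick counts are VSPEX-validated; larger clusters are a linear extrapolation for planning discussion, not validated numbers.

```python
# Hypothetical capacity estimate based on the linear scale-out described
# above. Only the single X-Brick (3,500 desktops) and Starter X-Brick
# (1,750 desktops) configurations are VSPEX-validated; larger clusters
# are a linear extrapolation for illustration only.

DESKTOPS_PER_XBRICK = 3500   # VSPEX-validated count for one X-Brick
MAX_XBRICKS = 6              # maximum XtremIO cluster size

def cluster_desktop_capacity(x_bricks: int) -> int:
    """Estimate desktop capacity for a cluster of x_bricks X-Bricks."""
    if not 1 <= x_bricks <= MAX_XBRICKS:
        raise ValueError("XtremIO clusters scale from 1 to 6 X-Bricks")
    return DESKTOPS_PER_XBRICK * x_bricks

print(cluster_desktop_capacity(2))  # 7000
```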
An end-user computing or virtual desktop infrastructure is a complex system offering. This Design Guide describes how to design an end-user computing solution according to best practices for Citrix XenDesktop for VMware vSphere enabled by EMC XtremIO, EMC Isilon, EMC VNX, and EMC Data Protection.
Business value
Employees are more mobile than ever, and they expect access to business-critical data and applications from any location and from any device. They want the flexibility to bring their own devices to work, which means IT departments are increasingly investigating and supporting Bring Your Own Device (BYOD) initiatives. This adds layers of complexity to safeguarding sensitive information. Deploying a virtual desktop infrastructure is one way to address these challenges.
Implementing large-scale virtual desktop environments presents many challenges, however. Administrators must rapidly roll out persistent or non-persistent desktops for all users (task workers, knowledge workers, and power users) while offering an outstanding user experience that outperforms physical desktops.
In addition to performance, a virtual desktop solution must be simple to deploy, manage, and scale, with substantial cost savings over physical desktops. Storage is also a critical component of an effective virtual desktop solution. EMC VSPEX Proven
Infrastructures are designed to help you address the most serious of IT challenges by creating solutions that are simple, efficient, and flexible and designed to take advantage of the many possibilities that XtremIO’s flash technology offers.
The business benefits of the VSPEX End-User Computing solution for Citrix XenDesktop include:
End-to-end virtualization solution to use the capabilities of the unified infrastructure components
Efficient virtualization for varied customer use cases of up to 3,500 virtual desktops for an X-Brick and up to 1,750 virtual desktops for a Starter X-Brick
Reliable, flexible, and scalable reference architectures
Scope
This Design Guide describes how to plan a simple, effective, and flexible VSPEX End-User Computing solution for Citrix XenDesktop 7.6. It provides a deployment example of virtual desktop storage on EMC XtremIO and user data storage on an Isilon system or VNX storage array.
The desktop virtualization infrastructure components of the solution are layered on a VSPEX Private Cloud for VMware vSphere Proven Infrastructure. This guide illustrates how to size XenDesktop on the VSPEX infrastructure, allocate resources according to best practices, and take advantage of all the benefits that VSPEX offers.
The optional RSA SecurID secure user authentication solution for XenDesktop is described in a separate document, Securing EMC VSPEX End-User Computing with RSA SecurID: Citrix XenDesktop 7 and VMware vSphere 5.1 for up to 2,000 Virtual Desktops Design Guide.
Audience
This guide is intended for internal EMC personnel and qualified EMC VSPEX Partners. The guide assumes that VSPEX partners who intend to deploy this VSPEX Proven Infrastructure for Citrix XenDesktop have the necessary training and background to install and configure an end-user computing solution based on Citrix XenDesktop with VMware vSphere as the hypervisor, XtremIO, Isilon, and VNX series storage systems, and associated infrastructure.
Readers should also be familiar with the infrastructure and database security policies of the customer installation.
This guide provides external references where applicable. EMC recommends that partners implementing this solution be familiar with these documents. For details, see Essential reading and Chapter 6: Reference Documentation.
Terminology
Table 1 lists the terminology used in this guide.
Table 1. Terminology
Term Definition
Data deduplication A feature of the XtremIO array that reduces physical storage utilization by eliminating redundant blocks of data.
Reference architecture
The validated architecture that supports this VSPEX end-user computing solution at particular points of scale: up to 3,500 virtual desktops for an X-Brick and up to 1,750 virtual desktops for a Starter X-Brick.
Reference workload For VSPEX end-user computing solutions, the reference workload is defined as a single virtual desktop—the reference virtual desktop—with the workload characteristics indicated in Table 5. By comparing the customer’s actual usage to this reference workload, you can determine which reference architecture to choose as the basis for the customer’s VSPEX deployment.
Refer to Reference workload for details.
Storage Processor (SP)
The compute component of the storage array. SPs are used for all aspects of data moved into, out of, and between arrays.
Storage Controller (SC)
The compute component of the XtremIO storage array. SCs are used for all aspects of data moved into, out of, and between XtremIO arrays.
Virtual Desktop Infrastructure (VDI)
Decouples the desktop from the physical machine. In a VDI environment, the desktop operating system (OS) and applications reside inside a virtual machine running on a host computer, with data residing on shared storage. Users access their virtual desktop from any computer or mobile device over a private network or internet connection.
XtremIO Management Server (XMS)
Used to manage the XtremIO array and deployed as a virtual machine using an Open Virtualization Alliance (OVA) package.
XtremIO Starter X-Brick
A specialized configuration of the EMC XtremIO All-Flash Array that includes 13 SSD drives for this solution.
XtremIO X-Brick A specialized configuration of the EMC XtremIO All-Flash Array that includes 25 SSD drives for this solution.
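The reference-workload approach defined in Table 1 can be sketched numerically. The per-desktop figures below are placeholders, not the validated characteristics (those are defined in Table 5), so treat this only as an illustration of how customer usage maps to reference virtual desktops:

```python
import math

# Hypothetical reference-desktop characteristics; substitute the
# validated values from Table 5 when actually sizing a deployment.
REFERENCE_DESKTOP = {"vcpus": 1, "ram_gb": 2, "iops": 10}

def reference_desktop_equivalents(user_count, vcpus, ram_gb, iops):
    """Map one class of users to an equivalent count of reference desktops.

    The most demanding resource dimension drives the conversion: a
    desktop needing twice the reference RAM counts as two reference
    desktops even if its other requirements match the reference.
    """
    per_user = max(
        vcpus / REFERENCE_DESKTOP["vcpus"],
        ram_gb / REFERENCE_DESKTOP["ram_gb"],
        iops / REFERENCE_DESKTOP["iops"],
    )
    return math.ceil(user_count * per_user)

# Example: 500 power users, each needing 2 vCPUs, 4 GB RAM, and 20 IOPS
print(reference_desktop_equivalents(500, 2, 4, 20))  # 1000
```

The total across all user classes is then compared against the reference architecture capacities (3,500 or 1,750 desktops) to select a configuration, as described in Chapter 4.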
Chapter 2 Before You Start
This chapter presents the following topics:
Deployment workflow .............................................................................................. 12
Essential reading ..................................................................................................... 12
Deployment workflow
To design and implement your end-user computing solution, refer to the process flow in Table 2.
Table 2. Deployment workflow
Step Action
1 Use the Customer Sizing Worksheet to collect customer requirements. Refer to Appendix A for more information.
2 Use the EMC VSPEX Sizing Tool to determine the recommended VSPEX reference architecture for your end-user computing solution, based on the customer requirements collected in Step 1.
For more information about the Sizing Tool, refer to the EMC VSPEX Sizing Tool portal.
Note: If the Sizing Tool is not available, you can manually size the application using the guidelines in Chapter 4.
3 Use this Design Guide to determine the final design for your VSPEX solution.
Note: Ensure that all resource requirements are considered and not just the requirements for end-user computing.
4 Select and order the right VSPEX reference architecture and Proven Infrastructure. Refer to the VSPEX Proven Infrastructure Guide in Essential reading for guidance on selecting a Private Cloud Proven Infrastructure.
5 Deploy and test your VSPEX solution. Refer to the VSPEX Implementation Guide in Essential reading for guidance.
Note: The solution was validated by EMC using the Login VSI tool, as described in Chapter 4. Refer to http://www.loginvsi.com for more information.
Essential reading
EMC recommends that you read the following documents, available from the VSPEX space in the EMC Community Network or from EMC.com or the VSPEX Proven Infrastructure partner portal.
EMC VSPEX End User Computing
EMC VSPEX End-User Computing: Citrix XenDesktop 7.6 and VMware vSphere with EMC XtremIO Implementation Guide
EMC VSPEX Private Cloud: VMware vSphere 5.5 for up to 1,000 Virtual Machines Proven Infrastructure Guide
Securing EMC VSPEX End-User Computing with RSA SecurID: Citrix XenDesktop 7 and VMware vSphere 5.1 for up to 2,000 Virtual Desktops Design Guide
Chapter 3 Solution Overview
This chapter presents the following topics:
Overview .................................................................................................................. 14
VSPEX Proven Infrastructures................................................................................... 14
Solution architecture ............................................................................................... 15
Key components ...................................................................................................... 18
Desktop virtualization broker ................................................................................... 19
Virtualization layer ................................................................................................... 21
Compute layer .......................................................................................................... 22
Network layer ........................................................................................................... 22
Storage layer ........................................................................................................... 23
Data protection layer................................................................................................ 33
Security layer ........................................................................................................... 33
Citrix ShareFile StorageZones solution .................................................................... 33
Overview
This chapter provides an overview of the VSPEX End-User Computing for Citrix XenDesktop on VMware vSphere solution and the key technologies used in the solution. The solution has been designed and proven by EMC to provide the desktop virtualization, server, network, storage, and data protection resources to support reference architectures of up to 3,500 virtual desktops for an X-Brick and up to 1,750 virtual desktops for a Starter X-Brick.
Although the desktop virtualization infrastructure components of the solution shown in Figure 3 are designed to be layered on a VSPEX Private Cloud solution, the reference architectures do not include configuration details for the underlying Proven Infrastructure. Refer to the VSPEX Proven Infrastructure Guide in Essential reading for information on configuring the required infrastructure components.
VSPEX Proven Infrastructures
EMC has joined forces with the industry-leading providers of IT infrastructure to create a complete virtualization solution that accelerates the deployment of the private cloud and Citrix XenDesktop virtual desktops. VSPEX enables customers to accelerate their IT transformation with faster deployment, greater simplicity and choice, higher efficiency, and lower risk, compared to the challenges and complexity of building an IT infrastructure themselves.
VSPEX validation by EMC ensures predictable performance and enables customers to select technology that uses their existing or newly acquired IT infrastructure while eliminating planning, sizing, and configuration burdens. VSPEX provides a virtual infrastructure for customers who want the simplicity characteristic of truly converged infrastructures, with more choice in individual stack components.
VSPEX Proven Infrastructures, as shown in Figure 1, are modular, virtualized infrastructures validated by EMC and delivered by EMC VSPEX partners. They include virtualization, server, network, storage, and data protection layers. Partners can choose the virtualization, server, and network technologies that best fit a customer’s environment, while the highly available EMC XtremIO, Isilon, and VNX family of storage systems and EMC Data Protection technologies provide the storage and data protection layers.
Figure 1. VSPEX Proven Infrastructures
Solution architecture
The EMC VSPEX End-User Computing for Citrix XenDesktop solution provides a complete system architecture capable of supporting up to 3,500 virtual desktops for an X-Brick and up to 1,750 virtual desktops for a Starter X-Brick. The solution supports block storage on XtremIO for virtual desktops and optional file storage on Isilon or VNX for user data.
High-level architecture
Figure 2 shows the high-level architecture of the validated solution.
Figure 2. Architecture of the validated solution
The solution uses EMC XtremIO, Isilon, VNX, and VMware vSphere to provide the storage and virtualization platforms for a Citrix XenDesktop environment of Microsoft Windows 7 virtual desktops provisioned by Citrix XenDesktop Machine Creation Services (MCS) or Citrix Provisioning Services (PVS).
For the solution, we1 deployed the XtremIO array in multiple X-Brick configurations to support up to 3,500 virtual desktops. We tested two XtremIO X-Brick types: a Starter X-Brick capable of hosting up to 1,750 virtual desktops, and an X-Brick capable of hosting up to 3,500 virtual desktops. We also deployed Isilon and VNX arrays to host user data.
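The validated capacities above translate into a simple first-pass sizing rule. The following sketch is a hypothetical helper, not an EMC tool; it assumes desktop capacity scales linearly beyond one X-Brick, consistent with the scale-out design described later in this chapter:

```python
import math

# Desktop capacities validated in this solution.
STARTER_XBRICK_DESKTOPS = 1750
XBRICK_DESKTOPS = 3500
MAX_XBRICKS_PER_CLUSTER = 6  # upper bound for a single XtremIO cluster

def suggest_xtremio_config(desktops: int) -> str:
    """Return a starting-point XtremIO configuration for a desktop count.

    Assumes capacity scales linearly as X-Bricks are added; validate any
    sizing with EMC or partner sizing tools before ordering.
    """
    if desktops <= 0:
        raise ValueError("desktop count must be positive")
    if desktops <= STARTER_XBRICK_DESKTOPS:
        return "Starter X-Brick"
    bricks = math.ceil(desktops / XBRICK_DESKTOPS)
    if bricks > MAX_XBRICKS_PER_CLUSTER:
        raise ValueError("workload exceeds a single XtremIO cluster")
    return f"{bricks} X-Brick cluster" if bricks > 1 else "X-Brick"
```

For example, 1,000 desktops fit on a Starter X-Brick, while 7,000 desktops call for a two-brick cluster.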
The highly available EMC XtremIO array provides the storage for the desktop virtualization components. The infrastructure services for the solution, as shown in Figure 3, can be provided by existing infrastructure at the customer site, by the VSPEX Private Cloud, or by deploying them as dedicated resources as part of the solution. The virtual desktops, as shown in Figure 3, require dedicated end-user computing resources and are not intended to be layered on a VSPEX Private Cloud.
1 In this guide, "we" refers to the EMC Solutions engineering team that validated the solution.
Planning and designing the storage infrastructure for a Citrix XenDesktop environment is critical because the shared storage must be able to absorb the large bursts of I/O that occur during the day. These bursts can otherwise lead to periods of erratic and unpredictable virtual desktop performance. Users can adapt to slow performance, but unpredictable performance frustrates them and reduces efficiency.
To provide predictable performance for end-user computing solutions, the storage system must be able to handle the peak I/O load from the clients while keeping response time to a minimum. This solution uses the EMC XtremIO array to provide the sub-millisecond response times the clients require, while the real-time, inline deduplication and inline compression features of the platform reduce the amount of physical storage needed.
EMC Data Protection solutions enable user data protection and end-user recoverability. This Citrix XenDesktop solution uses EMC Avamar and its desktop client to achieve this.
Logical architecture

The EMC VSPEX End-User Computing for Citrix XenDesktop solution supports block storage on XtremIO for the virtual desktops and optional file storage on Isilon or VNX for user data. Figure 3 shows the logical architecture of the solution.
Figure 3. Logical architecture for both block and file storage
This solution uses two networks:
One 8 Gb Fibre Channel network or 10 GbE iSCSI for carrying virtual desktop and virtual server OS data
One 10 Gb Ethernet network for carrying all other traffic.
Note: The solution also supports 1 Gb Ethernet if the bandwidth requirements are met.
Key components
This section provides an overview of the key technologies used in this solution, as outlined in Table 3.
Table 3. Solution components
Component Description
Desktop virtualization broker
Manages the provisioning, allocation, maintenance, and eventual removal of the virtual desktop images that are provided to users of the system. This software is critical to enable on-demand creation of desktop images, allow maintenance to the image without affecting user productivity, and prevent the environment from growing in an unconstrained way.
The desktop broker in this solution is Citrix XenDesktop 7.6.
Virtualization layer
Allows the physical implementation of resources to be decoupled from the applications that use them. In other words, the application’s view of the resources available is no longer directly tied to the hardware. This enables many key features in the end-user computing concept.
This solution uses VMware vSphere for the virtualization layer.
Compute layer
Provides memory and processing resources for the virtualization layer software as well as for the applications running in the infrastructure. The VSPEX program defines the minimum amount of compute layer resources required but allows the customer to implement the requirements using any server hardware that meets these requirements.
Network layer
Connects the users of the environment to the resources they need and connects the storage layer to the compute layer. The VSPEX program defines the minimum number of network ports required for the solution and provides general guidance on network architecture, but allows the customer to implement the requirements using any network hardware that meets these requirements.
Storage layer
A critical resource for the implementation of the end-user computing environment, the storage layer must be able to absorb large bursts of activity as they occur without unduly affecting the user experience.
This solution uses EMC XtremIO, Isilon, and VNX series arrays to efficiently handle this workload.
Data protection
An optional solution component that provides data protection in the event that data in the primary system is deleted, damaged, or otherwise unusable.
This solution uses EMC Avamar for data protection.
Security layer
An optional solution component that provides consumers with additional options to control access to the environment and ensure that only authorized users are permitted to use the system.
This solution uses RSA SecurID to provide secure user authentication.
Citrix ShareFile StorageZones solution
Optional support for Citrix ShareFile StorageZones deployments
Desktop virtualization broker
Overview

Desktop virtualization encapsulates and hosts desktop services on centralized computing resources in remote data centers. This enables end users to connect to their virtual desktops from many types of devices across a network connection, including desktops, laptops, thin clients, zero clients, smartphones, and tablets.
In this solution, we used Citrix XenDesktop to provision, manage, broker, and monitor the desktop virtualization environment.
Citrix XenDesktop 7.6

XenDesktop is the desktop virtualization solution from Citrix that enables virtual desktops to run on the vSphere virtualization environment. Citrix XenDesktop 7.6 integrates the Citrix XenApp application delivery and XenDesktop desktop virtualization technologies into a single architecture and management experience. This architecture unifies management and delivery components to enable a scalable, simple, efficient, and manageable solution for delivering Windows applications and desktops as secure mobile services to users anywhere, on any device.
Figure 4 shows the XenDesktop 7.6 architecture components.
Figure 4. XenDesktop 7.6 architecture components
The XenDesktop 7.6 architecture includes the following components:
Citrix Director—A web-based tool that enables IT support and help desk teams to monitor an environment, troubleshoot issues before they become system-critical, and perform support tasks for end users.
Citrix Receiver—Installed on user devices, Citrix Receiver provides users with quick, secure, self-service access to documents, applications, and desktops from any of the user’s devices including smartphones, tablets, and PCs. Receiver provides on-demand access to Windows, web, and software as a service (SaaS) applications.
Citrix StoreFront—Provides authentication and resource delivery services for Citrix Receiver. It enables centralized control of resources and provides users with on-demand, self-service access to their desktops and applications.
Citrix Studio—The management console that enables you to configure and manage your deployment, eliminating the need for separate consoles for managing delivery of applications and desktops. Studio provides various wizards to guide you through the process of setting up your environment, creating your workloads to host applications and desktops, and assigning applications and desktops to users.
Delivery Controller—Installed on servers in the data center, Delivery Controller consists of services that communicate with the hypervisor to distribute applications and desktops, authenticate and manage user access, and broker connections between users and their virtual desktops and applications. Delivery Controller manages the state of the desktops, starting and stopping them based on demand and administrative configuration. In some editions, the controller enables you to install profile management to manage user personalization settings in virtualized or physical Windows environments.
License Server—Assigns user or device licenses to the XenDesktop environment. The License Server can be installed alongside other Citrix XenDesktop components or on a separate virtual or physical machine.
Virtual Delivery Agent (VDA)—Installed on server or workstation operating systems, the VDA enables connections for desktops and applications. For remote PC access, install the VDA on the office PC.
Server OS machines—Virtual machines or physical machines, based on the Windows Server operating system, used for delivering applications or hosted shared desktops (HSDs) to users.
Desktop OS machines—Virtual machines or physical machines, based on the Windows Desktop operating system, used for delivering personalized desktops to users, or applications from desktop operating systems.
Remote PC Access—Enables users to access resources on their office PCs remotely, from any device running Citrix Receiver.
Machine Creation Services

Machine Creation Services (MCS) is a provisioning mechanism integrated with Citrix Studio, the XenDesktop management interface, that provisions, manages, and decommissions desktops throughout the desktop lifecycle from a central point of management.
MCS enables several types of desktop experience to be managed within a catalog in Citrix Studio. For a static desktop experience, the end user logs in to the same desktop at each session; for a random desktop experience, the user receives a new desktop at each login. Desktop customization is persistent for static desktops, which use the Personal vDisk (PvDisk or PvD) feature or the desktop's local hard drive to save changes. A random desktop discards changes and refreshes the desktop when the user logs off.
Citrix Provisioning Services

Citrix Provisioning Services (PVS) takes a different approach from traditional desktop imaging solutions by fundamentally changing the relationship between hardware and the software that runs on it. By streaming a single shared disk image (vDisk) instead of copying images to individual machines, PVS enables organizations to reduce the number of disk images that they manage. As the number of machines continues to grow, PVS provides the efficiency of centralized management with the benefits of distributed processing.
Because machines stream disk data dynamically in real time from a single shared image, machine image consistency is ensured. In addition, large pools of machines can completely change their configuration, applications, and even OS during a reboot operation.
Citrix Personal vDisk

The Citrix Personal vDisk (PvDisk or PvD) feature enables users to preserve customization settings and user-installed applications in a pooled desktop by redirecting the changes from the user’s pooled virtual machine to a separate Personal vDisk. During runtime, the content of the Personal vDisk is blended with the content from the base virtual machine to provide a unified experience to the end user. The Personal vDisk data is preserved during reboot and refresh operations.
Citrix Profile Management

Citrix Profile Management preserves user profiles and dynamically synchronizes them with a remote profile repository. Profile Management downloads a user’s remote profile dynamically when the user logs in to XenDesktop, and applies personal settings to desktops and applications regardless of the user’s login location or client device.
The combination of Profile Management and pooled desktops provides the experience of a dedicated desktop while potentially minimizing the amount of storage required in an organization.
Virtualization layer
VMware vSphere

VMware vSphere is the leading virtualization platform in the industry. It provides flexibility and cost savings by enabling the consolidation of large, inefficient server farms into nimble, reliable infrastructures. The core VMware vSphere components are the VMware vSphere hypervisor and VMware vCenter Server for system management.

This solution uses VMware vSphere Desktop Edition, which is intended for customers who want to purchase vSphere licenses for desktop virtualization only. vSphere Desktop provides the full range of features and functionality of the vSphere Enterprise Plus edition, enabling customers to achieve scalability, high availability, and optimal performance for all of their desktop workloads. vSphere Desktop also comes with unlimited vRAM entitlement.
VMware vCenter Server

VMware vCenter Server is a centralized platform for managing vSphere environments. It provides administrators with a single interface for all aspects of monitoring, managing, and maintaining the virtual infrastructure and can be accessed from multiple devices.
vCenter is also responsible for managing advanced features such as vSphere High Availability (HA), vSphere Distributed Resource Scheduler (DRS), vSphere vMotion, and vSphere Update Manager.
VMware vSphere High Availability

VMware vSphere High Availability (HA) provides uniform, cost-effective failover protection against hardware and OS outages:
If the virtual machine OS has an error, the virtual machine can be automatically restarted on the same hardware.
If the physical hardware has an error, the impacted virtual machines can be automatically restarted on other servers in the cluster.
With vSphere HA, you can configure policies to determine which machines are restarted automatically and under what conditions these operations should be performed.
VMware vShield Endpoint

VMware vShield Endpoint offloads virtual desktop antivirus and antimalware scanning operations to a dedicated secure virtual appliance delivered by VMware partners. Offloading scanning operations improves desktop consolidation ratios and performance by eliminating antivirus storms, streamlines antivirus and antimalware deployment, and satisfies compliance and audit requirements through detailed logging of antivirus and antimalware activities.
Compute layer
VSPEX defines the minimum amount of compute layer resources required, but allows the customer to implement the requirements using any server hardware that meets these requirements. For details, refer to Chapter 5.
Network layer
VSPEX defines the minimum number of network ports required for the solution and provides general guidance on network architecture, but allows the customer to implement the requirements using any network hardware that meets these requirements. For details, refer to Chapter 5.
Storage layer
The storage layer is a key component of any cloud infrastructure solution, storing and serving the data generated by applications and operating systems in the data center. This VSPEX solution uses EMC XtremIO storage arrays at the storage layer. The XtremIO platform provides the required storage performance, increases storage efficiency and management flexibility, and reduces total cost of ownership. The solution also uses EMC Isilon or VNX arrays to provide storage for user data.
EMC XtremIO

The EMC XtremIO All-Flash Array is a new design with a revolutionary architecture that brings together the requirements of the agile data center: linear scale-out, high availability, and rich data services for demanding workloads.
The basic hardware building block for these scale-out arrays is the X-Brick. Each X-Brick comprises two active-active controller nodes and a disk array enclosure packaged together with no single point of failure. A Starter X-Brick with 13 SSDs can be non-disruptively expanded to a full X-Brick with 25 SSDs without any downtime. Up to six X-Bricks can be combined in a single scale-out cluster to increase performance and capacity linearly.
The XtremIO platform is designed to maximize the use of flash storage media. Key attributes of this platform are:
Incredibly high levels of I/O performance, particularly for random I/O workloads that are typical in virtualized environments
Consistently low (sub-millisecond) latency
True inline data reduction—the ability to remove redundant information in the data path and write only unique data on the storage array, thus lowering the amount of capacity required
A full suite of enterprise array capabilities, such as integration with VMware through VAAI, N-way active controllers, high availability, strong data protection, and thin provisioning
XtremIO storage includes the following components:
Host adapter ports—Provide host connectivity through fabric into the array.
Storage controllers (SCs)—The compute component of the storage array. SCs handle all aspects of data moving into, out of, and between arrays.
Disk drives—Solid state drives (SSDs) that contain the host/application data and their enclosures.
InfiniBand switches—Switched, high-throughput, low-latency network links with quality-of-service and failover capabilities, used to interconnect X-Bricks in multi-X-Brick configurations.
XtremIO Operating System (XIOS)
The XtremIO storage cluster is managed by the XtremIO Operating System (XIOS). XIOS keeps the system balanced so that it always delivers the highest levels of performance without administrator intervention. XIOS:
Ensures that all SSDs in the system are evenly loaded, providing both the highest possible performance and endurance that stands up to demanding workloads for the entire life of the array.
Eliminates the need to perform the complex configuration steps found on traditional arrays. There is no need to set RAID levels, determine drive group sizes, set stripe widths, set caching policies, build aggregates, or set any other configuration parameters that require specialized storage skills.
Automatically and optimally configures every volume at all times. I/O performance on existing volumes and data sets automatically increases with large cluster sizes. Every volume is capable of receiving the full performance potential of the entire XtremIO system.
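The even SSD loading that XIOS maintains can be illustrated with a content-based placement sketch: if a block's placement is derived from a fingerprint of its contents rather than its logical address, load spreads statistically evenly across drives with no administrator-defined layout. This is a toy model for illustration only; the real XIOS placement logic is internal to the array:

```python
import hashlib
from collections import Counter

def place_block(block: bytes, num_ssds: int) -> int:
    """Map a block to an SSD by hashing its contents (toy model).

    A uniform content fingerprint spreads unique blocks evenly over the
    drives, with no RAID groups, stripe widths, or caching policies to
    configure.
    """
    digest = hashlib.sha1(block).digest()
    return int.from_bytes(digest[:4], "big") % num_ssds

# 100,000 unique blocks across the 25 SSDs of a full X-Brick:
counts = Counter(place_block(f"block-{i}".encode(), 25) for i in range(100_000))
# Each SSD ends up with roughly 4,000 blocks (100,000 / 25).
```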
Standards-based enterprise storage system
The XtremIO system interfaces with vSphere hosts using standard FC and iSCSI block interfaces. The system supports complete high-availability features, including support for native VMware multipath I/O, protection against failed SSDs, non-disruptive software and firmware upgrades, no single point of failure (SPOF), and hot-swappable components.
Real-time, inline data reduction
The XtremIO storage system deduplicates and compresses data, including desktop images, in real time, allowing a massive number of virtual desktops to reside in a small and economical amount of flash capacity. Furthermore, data reduction on the XtremIO array does not adversely affect input/output operations per second (IOPS) or latency; rather, it enhances the performance of the end-user computing environment.
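The effect of inline data reduction on capacity planning is simple arithmetic. The ratios below are workload-dependent assumptions for illustration, not guaranteed figures; full-clone desktop images typically deduplicate well because desktops share most of their blocks:

```python
def logical_capacity_tb(physical_tb: float, dedupe: float, compress: float) -> float:
    """Logical capacity served from a given physical flash capacity.

    dedupe and compress are reduction ratios (e.g. 5.0 means 5:1).
    Illustrative only; actual ratios vary by desktop image and workload.
    """
    return physical_tb * dedupe * compress

# 10 TB of physical flash with an assumed 5:1 deduplication ratio and
# 2:1 compression ratio holds 100 TB of logical desktop data.
```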
Scale-out design
The X-Brick is the fundamental building block of a scaled out XtremIO clustered system. Using a Starter X-Brick, virtual desktop deployments can start small (up to 1,750 virtual desktops) and grow to nearly any scale required by upgrading the Starter X-Brick to an X-Brick, and then configuring a larger XtremIO cluster if required. The system expands capacity and performance linearly as building blocks are added, making EUC sizing and management of future growth extremely simple.
VAAI integration
The XtremIO array is fully integrated with vSphere through vStorage APIs for Array Integration (VAAI). All API commands are supported, including ATS, Clone Blocks/Full Copy/XCOPY, Zero Blocks/Write Same, Thin Provisioning, and Block Delete. This, in combination with the array’s inline data reduction and in-memory metadata management, enables nearly instantaneous virtual machine provisioning and cloning and makes it possible to use large volume sizes for management simplicity.
Massive performance
The XtremIO array is designed to handle very high, sustained levels of small, random, mixed read and write I/O as is typical in virtual desktops, and to do so with consistent extraordinarily low latency.
Fast provisioning
XtremIO arrays deliver the industry’s first writeable snapshot technology that is space-efficient for both data and metadata. XtremIO snapshots are free from limitations of performance, features, topology, or capacity reservations. With their unique in-memory metadata architecture, XtremIO arrays can instantly clone desktop environments of any size.
Ease of use
The XtremIO storage system requires only a few basic setup steps, completed in minutes, and no tuning or ongoing administration to achieve and maintain high performance. In fact, an XtremIO system can be deployment ready less than one hour after delivery.
Security with Data at Rest Encryption (D@RE)
XtremIO arrays securely encrypt all data stored on the all-flash array, delivering protection, especially for persistent virtual desktops, in regulated use cases and sensitive industries such as healthcare, finance, and government.
Data center economics
Up to 3,500 desktops are easily supported on an X-Brick and 1,750 on a Starter X-Brick, requiring just a few rack units of space and approximately 750 W of power.
EMC Isilon

EMC Isilon scale-out network-attached storage (NAS) is ideal for storing large amounts of user data and Windows profiles in a Citrix XenDesktop infrastructure. It provides a simple, scalable, and efficient platform for storing massive amounts of unstructured data, enabling applications to use a scalable and accessible data repository without the overhead associated with traditional storage systems. Key attributes of the Isilon platform are:
Isilon is multi-protocol, supporting NFS, CIFS, HTTP, FTP, HDFS for Hadoop and data analytics, and REST for object and cloud computing.
At the client/application layer, the Isilon NAS architecture supports a wide range of operating system environments.
At the Ethernet level, Isilon uses a 10 GbE network.
Isilon’s OneFS operating system presents a single file system and single volume, which makes the cluster extremely easy to manage regardless of the number of nodes.
Isilon storage systems scale from a minimum of three nodes up to 144 nodes, all connected by an InfiniBand communications layer.
Figure 5. Isilon cluster components
Isilon OneFS
The Isilon OneFS operating system provides the intelligence behind all Isilon scale-out storage systems. It combines the three layers of traditional storage architectures—file system, volume manager, and data protection—into one unified software layer, creating a single intelligent file system that spans all nodes within an Isilon cluster.
Figure 6. EMC Isilon OneFS operating system functionality
OneFS provides a number of important advantages:
Simple to Manage as a result of Isilon’s single file system, single volume, global namespace architecture
Massive Scalability with the ability to scale to 20 PB in a single volume
Unmatched Efficiency with over 80% storage utilization, automated storage tiering, and Isilon SmartDedupe
Enterprise data protection, including efficient backup and disaster recovery, and N+1 through N+4 redundancy
Robust security and compliance options with:
Role-based access control
Secure Access Zones
SEC 17a-4 compliant WORM data security
D@RE with Self-Encrypting Drives (SEDs) option
Integrated File System Auditing support
Operational Flexibility with multi-protocol support including native HDFS support; Syncplicity® support for secure mobile computing; and support for object and cloud computing including OpenStack Swift.
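The N+1 through N+4 protection levels trade capacity for resilience. A rough way to reason about that trade-off, under the simplifying assumption that one protection stripe spans every node in the pool, is sketched below; real OneFS stripe widths and per-pool protection settings differ:

```python
def usable_fraction(nodes: int, parity: int) -> float:
    """Approximate usable-capacity fraction for N+d FEC protection.

    Simplified model: parity consumes `parity` of every `nodes` data
    units. Use Isilon sizing tools for actual figures.
    """
    if not 1 <= parity <= 4:
        raise ValueError("OneFS supports N+1 through N+4 protection")
    if nodes <= parity:
        raise ValueError("need more nodes than parity units")
    return (nodes - parity) / nodes

# A 10-node pool at N+2 keeps (10 - 2) / 10 = 80% of raw capacity
# usable, consistent with the "over 80% storage utilization" claim
# for larger clusters.
```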
Isilon offers a full suite of data protection and management software to help you protect your data assets, control costs, and optimize storage resources and system performance for your Big Data environment.
Data protection
SnapshotIQ: to protect data efficiently and reliably with secure, near instantaneous snapshots while incurring little to no performance overhead, and speed recovery of critical data with near-immediate, on-demand snapshot restores
SyncIQ: to replicate and distribute large, mission-critical data sets to multiple shared storage systems in multiple sites for reliable disaster recovery capability
SmartConnect: to enable client connection load balancing and dynamic NFS failover and failback of client connections across storage nodes to optimize use of cluster resources
SmartLock: to protect your critical data against accidental, premature, or malicious alteration or deletion with Isilon’s software-based approach to write once-read many (WORM) and meet stringent compliance and governance needs such as SEC 17a-4 requirements
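SmartConnect's connection balancing and failover can be sketched as round-robin name resolution that skips unhealthy nodes. This toy model assumes the simplest (round-robin) policy; real SmartConnect also offers CPU- and connection-count-based policies and dynamic IP failover:

```python
from itertools import cycle

class RoundRobinResolver:
    """Toy SmartConnect-style resolver: each query returns the next
    healthy node IP, so client connections spread across the cluster
    and a failed node's clients land on the survivors."""

    def __init__(self, node_ips):
        self.nodes = list(node_ips)
        self.healthy = set(node_ips)
        self._ring = cycle(self.nodes)

    def mark_failed(self, ip: str) -> None:
        self.healthy.discard(ip)

    def resolve(self) -> str:
        # Try each node at most once per query.
        for _ in range(len(self.nodes)):
            ip = next(self._ring)
            if ip in self.healthy:
                return ip
        raise RuntimeError("no healthy nodes in the cluster")
```

For example, with three nodes, successive queries return the three IPs in turn; after one node is marked failed, queries rotate over the remaining two.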
Data management
SmartPools: to implement a highly efficient, automated tiered storage strategy to optimize storage performance and costs
SmartDedupe: for data deduplication to reduce storage capacity requirements and associated costs by up to 35% without impacting performance
SmartQuotas: to assign and manage quotas that seamlessly partition and thin provision storage into easily managed segments at the cluster, directory, sub-directory, user, and group levels
InsightIQ: to gain innovative performance monitoring and reporting tools that can help you maximize performance of your Isilon scale-out storage system
Isilon for vCenter: to manage Isilon storage functions from vCenter
Isilon Scale-out NAS Product Family
The available Isilon nodes today are broken into several classes, according to their functionality:
S-Series: IOPS-intensive applications
X-Series: High-concurrency and throughput-driven workflows
NL-Series: Near-primary accessibility, with near-tape value
Performance Accelerator: Independent scaling for ultimate performance
Backup Accelerator: High-speed and scalable backup and restore solution
Figure 7. Isilon node classes
EMC VNX

The EMC VNX flash-optimized unified storage platform is ideal for storing user data and Windows profiles in a Citrix XenDesktop infrastructure. It delivers innovation and enterprise capabilities for file, block, and object storage in a single, scalable, easy-to-use solution. Ideal for mixed workloads in physical or virtual environments, VNX combines powerful and flexible hardware with advanced efficiency, management, and protection software to meet the demanding needs of today’s virtualized application environments.
VNX storage includes the following components:
Host adapter ports (for block)—Provide host connectivity through fabric into the array.
Data Movers (for file)—Front-end appliances that provide file services to hosts (optional if providing CIFS/SMB or NFS services).
Storage processors (SPs)—The compute component of the storage array. SPs handle all aspects of data moving into, out of, and between arrays.
Disk drives—Disk spindles and solid state drives (SSDs) that contain the host/application data and their enclosures.
Note: The term Data Mover refers to a VNX hardware component, which has a CPU, memory, and input/output (I/O) ports. It enables the CIFS (SMB) and NFS protocols on the VNX array.
EMC VNX series
The VNX series includes many features and enhancements designed and built on the success of the first generation, including:
More capacity and better optimization with VNX MCx™ technology: Multicore Cache, Multicore RAID, and Multicore FAST Cache
Greater efficiency with a flash-optimized hybrid array
Better protection by increasing application availability with active/active storage processors
Easier administration and deployment with the new Unisphere® Management Suite
VSPEX is built with VNX to deliver even greater efficiency, performance, and scale than ever before.
Flash-optimized hybrid array
VNX is a flash-optimized hybrid array that provides automated tiering to deliver the best performance to your critical data, while intelligently moving less frequently accessed data to lower-cost disks.
In this hybrid approach, a small percentage of flash drives in the overall system can provide a high percentage of the overall IOPS. Flash-optimized VNX takes full advantage of the low latency of flash to deliver cost-saving optimization and high performance scalability. EMC Fully Automated Storage Tiering Suite (FAST Cache and FAST VP) tiers both block and file data across heterogeneous drives and boosts the most active data to the flash drives, ensuring that customers never have to make concessions for cost or performance.
Data generally is accessed most frequently at the time it is created; therefore, new data is first stored on flash drives to provide the best performance. As the data ages and becomes less active over time, FAST VP tiers the data from high-performance to high-capacity drives automatically, based on customer-defined policies. This functionality has been enhanced with four times better granularity and with new FAST VP solid-state disks (SSDs) based on enterprise multilevel cell (eMLC) technology to lower the cost per gigabyte.
FAST Cache uses flash drives as an expanded cache layer for the array to dynamically absorb unpredicted spikes in system workloads. Frequently accessed data is copied to the FAST Cache in 64 KB increments. Subsequent reads and/or writes to the data chunk are serviced by FAST Cache. This enables immediate promotion of very active data to flash drives, dramatically improving the response times for the active data and reducing data hot spots that can occur within the LUN.
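FAST Cache's promotion behavior can be modeled as hit counting on 64 KB chunks. The promotion threshold of three accesses below is an assumption for illustration; the array's actual policy is internal to VNX:

```python
from collections import defaultdict

CHUNK_SIZE = 64 * 1024  # FAST Cache copies data in 64 KB increments

class FastCacheModel:
    """Toy model of FAST Cache promotion: a chunk that is accessed
    repeatedly is copied to flash, and later I/O to it is served
    from flash instead of spinning disk."""

    PROMOTE_AFTER = 3  # assumed hit threshold (illustrative)

    def __init__(self):
        self._hits = defaultdict(int)
        self._promoted = set()

    def access(self, byte_offset: int) -> str:
        chunk = byte_offset // CHUNK_SIZE
        if chunk in self._promoted:
            return "flash"
        self._hits[chunk] += 1
        if self._hits[chunk] >= self.PROMOTE_AFTER:
            self._promoted.add(chunk)
        return "disk"
```

After repeated accesses to offsets within the same 64 KB chunk, subsequent I/O to that chunk is served from flash, which is the hot-spot absorption behavior the paragraph above describes.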
All VSPEX use cases benefit from the increased efficiency provided by the FAST Suite. Furthermore, VNX provides out-of-band, block-based deduplication that can dramatically lower the costs of the flash tier.
Unisphere Management Suite
EMC Unisphere® is the central management platform for the VNX series, providing a single, combined view of file and block systems, with all features and functions available through a common interface. Unisphere is optimized for virtual applications and provides industry-leading VMware integration, automatically discovering virtual machines and ESX servers and providing end-to-end, virtual-to-physical mapping. Unisphere also simplifies configuration of FAST Cache and FAST VP on VNX platforms.
The new Unisphere Management Suite extends the easy-to-use interface of Unisphere to include VNX Monitoring and Reporting for validating performance and anticipating capacity requirements. As shown in Figure 8, the suite also includes Unisphere Remote for centrally managing thousands of VNX and VNXe systems with new support for XtremCache.
Figure 8. New Unisphere Management Suite
VMware Storage APIs for Storage Awareness
VMware vSphere Storage APIs for Storage Awareness (VASA) is a VMware-defined API that displays storage information through vCenter. Integration between VASA and VNX makes storage management in a virtualized environment a seamless experience.
EMC VNX Virtual Provisioning
EMC VNX Virtual Provisioning™ enables organizations to reduce storage costs by increasing capacity utilization, simplifying storage management, and reducing application downtime. Virtual Provisioning also helps companies to reduce power and cooling requirements and reduce capital expenditures.
Virtual Provisioning provides pool-based storage provisioning by implementing pool LUNs that can be either thin or thick. Thin LUNs provide on-demand storage that maximizes the utilization of your storage by allocating storage only as needed. Thick LUNs provide high performance and predictable performance for your applications. Both types of LUNs benefit from the ease-of-use features of pool-based provisioning.
Pools and pool LUNs are the building blocks for advanced data services such as FAST VP, VNX Snapshots, and compression. Pool LUNs also support a variety of additional features, such as LUN shrink, online expansion, and user capacity threshold setting.
VNX file shares
In many environments, it is important to have a common location in which to store files accessed by many users. CIFS or NFS file shares, which are available from a file server, provide this ability. VNX storage arrays can provide this service along with centralized management, client integration, advanced security options, and efficiency improvement features. For more information about VNX file shares, refer to EMC VNX Series: Configuring and Managing CIFS on VNX on EMC Online Support.
EMC SnapSure
EMC SnapSure™ is an EMC VNX File software feature that enables you to create and manage checkpoints that are point-in-time logical images of a production file system (PFS). SnapSure uses a copy-on-first-modify principle. A PFS consists of blocks; when a block within the PFS is modified, a copy containing the block's original contents is saved to a separate volume called the SavVol.
Subsequent changes made to the same block in the PFS are not copied into the SavVol. SnapSure reads the original blocks of the PFS from the SavVol, and the unchanged blocks from the PFS itself, according to a bitmap and blockmap data-tracking structure. Together, these blocks provide a complete point-in-time image called a checkpoint.
A checkpoint reflects the state of a PFS at the time the checkpoint is created. SnapSure supports the following checkpoint types:
Read-only checkpoints—Read-only file systems created from a PFS
Writeable checkpoints—Read/write file systems created from a read-only checkpoint
SnapSure can maintain a maximum of 96 read-only checkpoints and 16 writeable checkpoints per PFS, while allowing PFS applications continued access to real-time data.
Note: Each writeable checkpoint is associated with a read-only checkpoint, referred to as the baseline checkpoint. Each baseline checkpoint can have only one associated writeable checkpoint.
The EMC document Using VNX SnapSure, available on emc.com, provides more details.
Virtualization management
EMC Virtual Storage Integrator (VSI) for VMware vSphere Web Client
The EMC® Virtual Storage Integrator (VSI) for VMware vSphere Web Client is a plug-in for VMware vCenter. It enables administrators to view, manage, and optimize storage for VMware ESX/ESXi hosts and to map that storage to the hosts.
VSI consists of a graphical user interface and the EMC Solutions Integration Service (SIS), which provides communication and access to the storage systems. Depending on the platform, tasks that you can perform with VSI include:
Storage provisioning
Cloning
Block deduplication
Compression
Storage mapping
Capacity monitoring
Virtual desktop infrastructure (VDI) integration
Using the Storage Access feature, a storage administrator can enable virtual machine administrators to perform management tasks on a set of storage pools.
The current version of VSI supports the following EMC storage systems and features:

EMC ViPR™ software-defined storage:
- View properties of NFS and VMFS datastores and RDM volumes
- Provision NFS and VMFS datastores and RDM volumes

EMC VNX® storage for ESX/ESXi hosts:
- View properties of NFS and VMFS datastores and RDM volumes
- Provision NFS and VMFS datastores and RDM volumes
- Compress and decompress storage system objects on NFS and VMFS datastores
- Enable and disable block deduplication on VMFS datastores
- Create fast clones and full clones of virtual machines on NFS datastores

EMC Symmetrix® VMAX® storage systems:
- View properties of VMFS datastores and RDM volumes
- Provision VMFS datastores and RDM volumes

EMC XtremIO® storage systems:
- View properties of ESX/ESXi datastores and RDM disks
- Provision VMFS datastores and RDM volumes
- Create full clones using XtremIO native snapshots
- Integrate with VMware Horizon View and Citrix XenDesktop
Refer to the EMC VSI for VMware vSphere product guides on EMC Online Support for more information.
Data protection layer
Backup and recovery provides data protection by backing up data files or volumes on defined schedules and restoring data from the backup if recovery is needed after a disaster. EMC Avamar delivers the protection confidence needed to accelerate deployment of VSPEX end-user computing solutions.
Avamar empowers administrators to centrally back up and manage policies and end-user computing infrastructure components, while allowing end users to efficiently recover their own files from a simple and intuitive web-based interface. By moving only new, unique sub-file data segments, Avamar delivers fast full backups daily, with up to 90 percent reduction in backup times, while reducing the required daily network bandwidth by up to 99 percent. All Avamar recoveries are single-step for simplicity.
With Avamar, you can choose to back up virtual desktops using either image-level or guest-based operations. Avamar runs the deduplication engine at the virtual machine disk (VMDK) level for image-level backups and at the file level for guest-based backups.
Image-level protection enables backup clients to make a copy of all the virtual disks and configuration files associated with the particular virtual desktop in the event of hardware failure, corruption, or accidental deletion. Avamar significantly reduces the backup and recovery time of the virtual desktop by using change block tracking (CBT) on both backup and recovery.
Guest-based protection runs like traditional backup solutions. Guest-based backup can be used on any virtual machine running an OS for which an Avamar backup client is available. It enables fine-grained control over backup content through inclusion and exclusion patterns, which can prevent data loss due to user errors such as accidental file deletion. Installing the desktop/laptop agent on the system to be protected enables end-user, self-service recovery of data.
Security layer
RSA SecurID two-factor authentication can provide enhanced security for the VSPEX end-user computing environment by requiring the user to authenticate with two pieces of information, collectively called a passphrase. SecurID functionality is managed through RSA Authentication Manager, which also controls administrative functions such as token assignment to users, user management, and high availability.
The Securing EMC VSPEX End-User Computing with RSA SecurID: Citrix XenDesktop 7 and VMware vSphere 5.1 for up to 2,000 Virtual Desktops Design Guide provides details for planning the security layer.
Citrix ShareFile StorageZones solution
Citrix ShareFile is a cloud-based file sharing and storage service for enterprise-class storage and security. ShareFile enables users to securely share documents with other
users. ShareFile users include employees and users who are outside of the enterprise directory (referred to as clients).
ShareFile StorageZones enables businesses to share files across the organization while meeting compliance and regulatory concerns. StorageZones enables customers to keep their data on on-premises storage systems. It facilitates sharing of large files with full encryption and provides the ability to synchronize files with multiple devices.
By keeping data on the premises and closer to users than data residing on the public cloud, StorageZones can provide improved performance and security.
The main features available to ShareFile StorageZones users are:
Use of StorageZones with or instead of ShareFile-managed cloud storage.
Ability to configure Citrix CloudGateway Enterprise to integrate ShareFile services with Citrix Receiver for user authentication and user provisioning.
Automated reconciliation between the ShareFile cloud and an organization’s StorageZones deployment.
Automated antivirus scans of uploaded files.
File recovery from Storage Center backup (Storage Center is the server component of StorageZones). StorageZones enables you to browse the file records for a particular date and time and tag any files and folders to restore from Storage Center backup.
With additional infrastructure, the VSPEX end-user computing for Citrix XenDesktop solution supports ShareFile StorageZones with Storage Center.
Chapter 4 Sizing the Solution
This chapter presents the following topics:
Overview .................................................................................................................. 36
Reference workload.................................................................................................. 36
VSPEX Private Cloud requirements ........................................................................... 37
VSPEX XtremIO array configurations ........................................................................ 38
Isilon configuration .................................................................................................. 39
VNX array configurations ......................................................................................... 40
Choosing the appropriate reference architecture ..................................................... 41
Overview
This chapter describes how to design a VSPEX End-User Computing for Citrix XenDesktop solution and how to size it to fit the customer’s needs. It introduces the concepts of a reference workload, building blocks, and validated end-user computing maximums, and describes how to use these to design your solution. Table 4 outlines the high-level steps you need to complete when sizing the solution.
Table 4. VSPEX end-user computing: Design process
Step Action
1 Use the Customer Sizing Worksheet in Appendix A to collect the customer requirements for the end-user computing environment.
2 Use the EMC VSPEX Sizing Tool to determine the recommended VSPEX reference architecture for your end-user computing solution, based on the customer requirements collected in Step 1.
Note: If the Sizing Tool is not available, you can manually size the end-user computing solution using the guidelines in this chapter.
Reference workload
VSPEX defines a reference workload to represent a unit of measure for quantifying the resources in the solution reference architectures. By comparing the customer’s actual usage to this reference workload, you can determine which reference architecture to choose as the basis for the customer’s VSPEX deployment.
For VSPEX end-user computing solutions, the reference workload is defined as a single virtual desktop—the reference virtual desktop—with the workload characteristics listed in Table 5.
To determine the equivalent number of reference virtual desktops for a particular resource requirement, use the VSPEX Customer Sizing Worksheet to convert the total actual resources required for all desktops into the reference virtual desktop format.
Table 5. Reference virtual desktop characteristics

Characteristic                                    Value
Desktop OS (VDI) type                             Windows 7 Enterprise Edition (32-bit) or Windows 8.1 Enterprise Edition (32-bit)
Server OS (HSD) type                              Windows Server 2012 R2
Virtual processors per virtual desktop            1
RAM per virtual desktop                           2 GB
Average IOPS per virtual desktop at steady state  10
Applications                                      Internet Explorer 10, Microsoft Office 2010, Adobe Reader XI, Adobe Flash Player 11 ActiveX, Doro PDF printer 1.8
Workload generator                                Login VSI 4.1.2
Workload type                                     Office worker
This desktop definition is based on user data that resides on shared storage. The I/O profile is defined by using a test framework that runs all desktops concurrently with a steady load generated by the constant use of office-based applications such as browsers and office productivity software.
Login VSI
This solution is verified with performance testing conducted using Login VSI, which is the industry-standard load testing solution for virtualized desktop environments.
Login VSI provides proactive performance management solutions for virtualized desktop and server environments. Enterprise IT departments use Login VSI products in all phases of their virtual desktop deployment—from planning to deployment to change management—for more predictable performance, higher availability, and a more consistent end user experience. The world's leading virtualization vendors use the flagship product, Login VSI, to benchmark performance. With minimal configuration, Login VSI products work in VMware Horizon View, Citrix XenDesktop and XenApp, Microsoft Remote Desktop Services (Terminal Services), and any other Windows-based virtual desktop solution.
For more information, download a trial at www.loginvsi.com.
VSPEX Private Cloud requirements
This VSPEX End-User Computing Proven Infrastructure requires multiple application servers. Unless otherwise specified, all servers use Microsoft Windows Server 2012 R2 as the base OS. Table 6 lists the minimum requirements for each infrastructure server.
Table 6. Infrastructure server minimum requirements
Server                                CPU      RAM    IOPS  Storage capacity
Domain controllers (each)             2 vCPUs  4 GB   25    32 GB
SQL Server                            2 vCPUs  6 GB   100   200 GB
vCenter Server                        4 vCPUs  8 GB   100   80 GB
Citrix XenDesktop controllers (each)  2 vCPUs  8 GB   50    32 GB
Citrix PVS servers (each)             4 vCPUs  20 GB  75    150 GB
The Citrix ShareFile StorageZones solution section in Chapter 3 provides the requirements for the optional Citrix ShareFile component.
Private cloud storage layout
This solution requires a 1.5 TB volume to host the infrastructure virtual machines, which can include the VMware vCenter Server, Citrix XenDesktop Controllers, Citrix PVS servers, optional Citrix ShareFile servers, Microsoft Active Directory Server, and Microsoft SQL Server.
VSPEX XtremIO array configurations
We validated the VSPEX XtremIO end-user computing configurations on two types of XtremIO building blocks, the Starter X-Brick and the X-Brick, which vary in the number of SSDs they include and in their total available capacity. For each array, EMC recommends a maximum VSPEX end-user computing configuration, as outlined in this section.
Validated XtremIO configurations
The following XtremIO validated disk layouts provide support for a specified number of virtual desktops at a defined performance level. This VSPEX solution supports two XtremIO X-Brick configurations, which are selected based on the number of desktops being deployed:
XtremIO Starter X-Brick—The XtremIO Starter X-Brick includes 13 SSD drives, and is validated to support up to 1,750 virtual desktops.
XtremIO X-Brick—The XtremIO X-Brick includes 25 SSD drives, and is validated to support up to 3,500 virtual desktops.
The XtremIO storage configuration required for this solution is in addition to the storage required by the VSPEX private cloud that supports the solution’s infrastructure services. For more information about the VSPEX private cloud storage pool, refer to the VSPEX Proven Infrastructure Guide in Essential reading.
XtremIO storage layout
Table 7 shows the number and size of the XtremIO volumes that the solution presents to the vSphere servers as VMFS datastores for virtual desktop storage. Two datastore configurations are listed for each desktop type: one that includes the space required to use the Citrix Personal vDisk (PvD) feature, and one for solutions that will not use that component of Citrix XenDesktop. Note that when deploying Citrix desktops with PVS or PvD, the following values are configured by default:
PVS write cache disk – 6 GB
Citrix Personal vDisk (PvD) – 10 GB
If either of these values is changed from the default, the datastore sizes must be adjusted accordingly.
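As a rough illustration of that adjustment, the sketch below scales a validated volume size when a per-desktop default changes. The helper is hypothetical and assumes the extra per-desktop space adds directly to each volume, ignoring any array-side overhead; the baseline figures come from Table 7.

```python
import math

def adjusted_volume_size_gb(base_volume_gb, desktops, volumes,
                            default_per_desktop_gb, new_per_desktop_gb):
    """Rough adjustment of a validated datastore size when the default
    PVS write cache or PvD size is changed (hypothetical helper; assumes
    the extra per-desktop space adds directly to each volume)."""
    desktops_per_volume = math.ceil(desktops / volumes)
    delta = new_per_desktop_gb - default_per_desktop_gb
    return base_volume_gb + desktops_per_volume * delta

# Starter X-Brick PVS layout from Table 7: 1,750 desktops across 7 volumes
# of 2,500 GB. Doubling the PVS write cache from the 6 GB default to 12 GB:
size = adjusted_volume_size_gb(2500, 1750, 7, 6, 12)
```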
Table 7. XtremIO storage layout

XtremIO configuration  Number of desktops  Type of desktop        Number of volumes  Volume size
Starter X-Brick        1,750               PVS streamed           7                  2,500 GB
                                           PVS with PvD streamed  7                  5,000 GB
                                           MCS                    14                 750 GB
                                           MCS with PvD           14                 2,000 GB
X-Brick                3,500               PVS streamed           14                 2,500 GB
                                           PVS with PvD streamed  14                 5,000 GB
                                           MCS                    28                 750 GB
                                           MCS with PvD           28                 2,000 GB
Expanding existing VSPEX end-user computing environments
The EMC VSPEX End-User Computing solution supports a flexible implementation model that makes it easy to expand your environment as the needs of the business change.
To support future expansion, the XtremIO Starter X-Brick can be non-disruptively upgraded to an X-Brick by installing the XtremIO expansion kit, which adds an additional twelve 400 GB SSD drives. The resulting X-Brick supports up to 3,500 desktops.
To support more than 3,500 reference virtual desktops, XtremIO supports scaling out online by adding more X-Bricks. Each additional X-Brick increases performance and virtual desktop capacity linearly. Two X-Brick, four X-Brick, and six X-Brick XtremIO clusters are all valid configurations.
Isilon configuration
This solution uses the EMC Isilon system to store user data, home directories, and profiles. A three-node Isilon cluster supports 2,500 users' data with the reference workload validated in this solution. Each node has 36 drives (2 EFDs and 34 SATA drives) and two 10 GbE ports. Table 8 provides detailed information.
Table 8. User data resource requirements on Isilon

Number of reference virtual desktops  Number of nodes  Node type  Max capacity/user (GB)
1–2,500                               3                X410       36
2,501–3,500                           4                X410       35
3,501–5,000                           5                X410       30
Table 8 shows the recommended Isilon configurations, with the total number of CIFS operations as the sizing baseline. Each X410 node used in this solution provides 30 TB of usable capacity. If more capacity per user is needed, additional nodes can be added.
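The per-user capacities in Table 8 follow from the 30 TB of usable capacity per node. The sketch below reproduces them, assuming binary units (1 TB = 1,024 GB) and rounding down to whole gigabytes; both assumptions are ours, made to match the published values.

```python
USABLE_TB_PER_NODE = 30   # from the text: each X410 node provides 30 TB usable

def max_capacity_per_user_gb(nodes, users):
    """Per-user capacity for an Isilon cluster, rounded down to whole GB
    (assumes 1 TB = 1,024 GB, which reproduces the Table 8 values)."""
    total_gb = nodes * USABLE_TB_PER_NODE * 1024
    return total_gb // users

# The three Table 8 rows: (nodes, users)
rows = [max_capacity_per_user_gb(n, u) for n, u in [(3, 2500), (4, 3500), (5, 5000)]]
```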
This solution is also capable of supporting other Isilon node types. Refer to the VSPEX Sizing Tool or check with your EMC sales representative for more information.
VNX array configurations
This solution also supports using VNX series storage arrays for user data storage, with FAST Cache enabled for the related storage pools. The VNX5400™ can support up to 1,750 users with the reference workload validated in this solution. The VNX5600™ can support up to 3,500 users with the reference workload. Table 9 shows the detailed requirements for 1,250 – 3,500 users.
Table 9 shows the recommended VNX configurations, with total CIFS operations as the sizing baseline. Each 6+2 RAID 6 group of 2 TB NL-SAS drives used in this solution provides 10 TB of usable capacity. Add more 6+2 RAID 6 groups if more capacity per user is needed.
Table 9. User data resource requirements on VNX

Number of users  VNX model  SSDs for FAST Cache  Number of 2 TB NL-SAS drives  Max capacity/user (GB)
1,250            VNX5400    2                    16                            15
1,750            VNX5400    2                    32                            22
2,500            VNX5600    4                    40                            19
3,500            VNX5600    4                    48                            17
Refer to the VSPEX Sizing Tool or check with your EMC sales representative for more information about larger-scale configurations.
EMC FAST VP
If multiple drive types have been implemented, FAST VP can be enabled to automatically tier data to balance differences in performance and capacity.
Note: FAST VP can provide performance improvements when implemented for user data and roaming profiles.
VNX shared file systems
The virtual desktops use four shared file systems—two for the Citrix XenDesktop Profile Management repositories and two to redirect user storage that resides in home directories. In general, redirecting users’ data out of the base image to VNX for
File enables centralized administration and data protection, and makes the desktops more stateless. Each file system is exported to the environment through a CIFS share. Each Profile Management repository share and home directory share serves an equal number of users.
Choosing the appropriate reference architecture
To choose the appropriate reference architecture for a customer environment, you must determine the resource requirements of the environment and then translate these requirements to an equivalent number of reference virtual desktops that have the characteristics defined in Table 5. This section describes how to use the Customer Sizing Worksheet to simplify the sizing calculations, as well as additional factors to consider when deciding which architecture to deploy.
Using the Customer Sizing Worksheet
The Customer Sizing Worksheet helps you to assess the customer environment and calculate the sizing requirements of the environment.
Table 10 shows a completed worksheet for a sample customer environment. Appendix A provides a blank Customer Sizing Worksheet that you can print out and use to help size the solution for a customer.
Table 10. Example Customer Sizing Worksheet

User type                                                vCPUs  Memory  IOPS  Equivalent reference virtual desktops  No. of users  Total reference desktops
Heavy users     Resource requirements                    2      8 GB    12    ---                                    ---           ---
                Equivalent reference virtual desktops    2      4       2     4                                      200           800
Moderate users  Resource requirements                    2      4 GB    8     ---                                    ---           ---
                Equivalent reference virtual desktops    2      2       1     2                                      200           400
Typical users   Resource requirements                    1      2 GB    8     ---                                    ---           ---
                Equivalent reference virtual desktops    1      1       1     1                                      1,200         1,200
Total                                                                                                                              2,400
To complete the Customer Sizing Worksheet, follow these steps:
1. Identify the user types planned for migration into the VSPEX end-user computing environment and the number of users of each type.
2. For each user type, determine the compute resource requirements in terms of vCPUs, memory (GB), storage performance (IOPS), and storage capacity.
3. For each resource type and user type, determine the equivalent reference virtual desktop requirement, that is, the number of reference virtual desktops required to meet the specified resource requirements.
4. Determine the total number of reference desktops needed from the resource pool for the customer environment.
Determining the resource requirements
CPU

The reference virtual desktop outlined in Table 5 assumes that most desktop applications are optimized for a single CPU. If one type of user requires a desktop with multiple virtual CPUs, modify the proposed virtual desktop count to account for the additional resources. For example, if you virtualize 100 desktops, but 20 users require two CPUs instead of one, consider that your pool needs to provide 120 virtual desktops of capability.
Memory

Memory plays a key role in ensuring application functionality and performance. Each group of desktops will have different targets for the available memory that is considered acceptable. Like the CPU calculation, if a group of users requires additional memory resources, simply adjust the number of planned desktops to accommodate the additional resource requirements.

For example, if there are 200 desktops to be virtualized, but each one needs 4 GB of memory instead of the 2 GB that the reference virtual desktop provides, plan for 400 reference virtual desktops.
IOPS

The storage performance requirements for desktops are usually the least understood aspect of performance. The reference virtual desktop uses a workload generated by an industry-recognized tool to execute a wide variety of office productivity applications that should be representative of the majority of virtual desktop implementations.
Storage capacity

The storage capacity requirement for a desktop can vary widely depending on the types of applications in use and specific customer policies. The virtual desktops in this solution rely on additional shared storage for user profile data and user documents. This requirement is an optional component that can be met by the addition of specific storage hardware defined in the solution. It can also be met by using existing file shares in the environment.
Determining the equivalent reference virtual desktops
With all of the resources defined, you determine the number of equivalent reference virtual desktops by using the relationships indicated in Table 11. Round all values up to the closest whole number.
Table 11. Reference virtual desktop resources

Resource  Value for reference virtual desktop  Relationship between requirements and equivalent reference virtual desktops
CPU       1                                    Equivalent reference virtual desktops = resource requirements
Memory    2                                    Equivalent reference virtual desktops = (resource requirements)/2
IOPS      10                                   Equivalent reference virtual desktops = (resource requirements)/10
For example, the heavy user type in Table 10 requires 2 virtual CPUs, 12 IOPS, and 8 GB of memory for each desktop. This translates to 2 reference virtual desktops of CPU, 4 reference virtual desktops of memory, and 2 reference virtual desktops of IOPS.
The number of reference virtual desktops required for each user type then equals the maximum required for an individual resource. For example, the number of equivalent reference virtual desktops for the heavy user type in Table 10 is four, as this number meets all the resource requirements: IOPS, vCPU, and memory.
To calculate the total number of reference desktops for a user type, you multiply the number of equivalent reference virtual desktops for that user type by the number of users.
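The worksheet arithmetic described above (per-resource equivalents rounded up, then the maximum, then multiplied by the user count) can be sketched as follows, using the Table 10 example values.

```python
import math

# Reference virtual desktop values from Table 5 / Table 11
REF_VCPUS, REF_MEM_GB, REF_IOPS = 1, 2, 10

def equivalent_reference_desktops(vcpus, mem_gb, iops):
    """Per-desktop equivalents for each resource; the user type's figure
    is the maximum, with each value rounded up to the closest whole number."""
    per_resource = (math.ceil(vcpus / REF_VCPUS),
                    math.ceil(mem_gb / REF_MEM_GB),
                    math.ceil(iops / REF_IOPS))
    return max(per_resource)

# Table 10 example: heavy (2 vCPUs, 8 GB, 12 IOPS) x 200 users,
# moderate (2, 4, 8) x 200 users, typical (1, 2, 8) x 1,200 users
user_types = [((2, 8, 12), 200), ((2, 4, 8), 200), ((1, 2, 8), 1200)]
total = sum(equivalent_reference_desktops(*req) * users
            for req, users in user_types)
```

The heavy user type yields four equivalent reference desktops (memory dominates), and the grand total matches the 2,400 reference desktops in the example worksheet.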
Determining the total reference virtual desktops
After the worksheet is completed for each user type that the customer wants to migrate into the virtual infrastructure, you compute the total number of reference virtual desktops required in the resource pool by calculating the sum of the total reference virtual desktops for all user types. In the example in Table 10, the total is 2,400 virtual desktops.
Selecting a reference architecture
This VSPEX end-user computing reference architecture supports two separate points of scale: a Starter X-Brick capable of supporting up to 1,750 reference desktops, and an X-Brick capable of hosting up to 3,500 reference desktops. Use the total reference virtual desktops value from the completed Customer Sizing Worksheet to verify that this reference architecture is adequate for the customer requirements. In the example in Table 10, the customer requires 2,400 virtual desktops of capability from the pool, so this reference architecture provides sufficient resources for current needs as well as some room for growth.
However, there may be other factors to consider when verifying that this reference architecture will perform as intended. For example:
Concurrency
The reference workload used to validate this solution assumes that all desktop users will be active at all times. In other words, we tested this 3,500-desktop reference architecture with 3,500 desktops, all generating workload in parallel, all booted at the same time, and so on. If the customer expects to have 3,500 users, but only 50 percent of them will be logged on at any given time due to
time zone differences or alternate shifts, the reference architecture may be able to support additional desktops in this case.
Heavier desktop workloads
The reference workload is considered a typical office worker load. However, some customers’ users might have a more active profile.
If a company has 3,500 users and, due to custom corporate applications, each user generates 50 predominantly write IOPS, compared with the 10 IOPS used in the reference workload, the company needs 175,000 IOPS (3,500 users x 50 IOPS per desktop). This configuration would be underpowered because the proposed I/O load is greater than the array maximum of 100,000 write IOPS. The company would need to deploy an additional X-Brick, reduce the current I/O load, or reduce the total number of desktops to ensure that the storage array performs as required.
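A headroom check like the one in this example is easy to script. The sketch below uses the 100,000 write IOPS array maximum quoted above; treat that figure as illustrative for this example rather than a general specification.

```python
ARRAY_MAX_WRITE_IOPS = 100_000   # single X-Brick maximum cited in the text

def required_iops(users, iops_per_desktop):
    """Aggregate steady-state IOPS demand for a desktop population."""
    return users * iops_per_desktop

# Custom-application profile from the example: 3,500 users at 50 write IOPS each
demand = required_iops(3500, 50)
fits = demand <= ARRAY_MAX_WRITE_IOPS   # False: an additional X-Brick is needed
```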
Fine tuning hardware resources

In most cases, the Customer Sizing Worksheet suggests a reference architecture adequate for the customer's needs. In some cases, however, you may want to further customize the hardware resources available to the system. A complete description of the system architecture is beyond the scope of this document, but you can customize your solution at this point.
Storage resources
The EMC XtremIO array is deployed in a specialized configuration known as an X-Brick. While additional X-Bricks can be added to increase the capacity or performance capabilities of the XtremIO cluster, this solution is based on a single X-Brick. The XtremIO array requires no tuning, and the number of SSDs in the array is fixed. Use the VSPEX Sizing Tool or the Customer Sizing Worksheet to verify that the XtremIO array can provide the necessary levels of capacity and performance.
Server resources
For the server resources in the solution, you can customize the hardware more precisely. To do this, first total the resource requirements for the server components, as shown in Table 12. Note the addition of the Total CPU resources and Total memory resources columns to the worksheet.
Table 12. Server resource component totals

User type        vCPUs   Memory (GB)   Number of users   Total CPU resources   Total memory resources (GB)
Heavy users        2          8              200                 400                  1,600
Moderate users     2          4              200                 400                    800
Typical users      1          2            1,200               1,200                  2,400
Total                                                           2,000                  4,800
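The totals in Table 12 are simple products of user counts and per-desktop allocations. A sketch of the worksheet arithmetic:

```python
# Reproduce the Table 12 worksheet totals: per-type and grand totals.
user_types = {
    # name: (vCPUs per desktop, memory GB per desktop, number of users)
    "Heavy users":    (2, 8, 200),
    "Moderate users": (2, 4, 200),
    "Typical users":  (1, 2, 1200),
}

total_vcpus = total_mem_gb = 0
for name, (vcpus, mem_gb, users) in user_types.items():
    type_vcpus = vcpus * users
    type_mem = mem_gb * users
    total_vcpus += type_vcpus
    total_mem_gb += type_mem
    print(f"{name}: {type_vcpus} vCPUs, {type_mem} GB")

print(f"Total: {total_vcpus} vCPUs, {total_mem_gb} GB")  # Total: 2000 vCPUs, 4800 GB
```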
Fine tuning hardware resources
The example in Table 12 requires 2,000 vCPUs and 4,800 GB of memory. The reference architectures assume five desktops per physical processor core and no memory over-provisioning. This converts to 500 processor cores and 4,800 GB of memory for this example. Use these calculations to more accurately determine the total server resources required.
Note: Keep high availability requirements in mind when customizing the resource pool hardware.
Summary
EMC considers the requirements stated in this solution to be the minimum set of resources needed to handle the workloads defined for a reference virtual desktop. In any customer implementation, the load of a system can vary over time as users interact with the system. If the number of customer virtual desktops differs significantly from the reference definition and varies in the same resource group, you might need to add more of that resource to the system.
Chapter 5 Solution Design Considerations and Best Practices
This chapter presents the following topics:
Overview .................................................................................................................. 47
Server design considerations ................................................................................... 47
Network design considerations ................................................................................ 53
Storage design considerations ................................................................................ 58
High availability and failover ................................................................................... 59
Validation test profile .............................................................................................. 62
EMC Data Protection configuration guidelines ......................................................... 63
Overview
This chapter describes best practices and considerations for designing the VSPEX end-user computing solution. For more information on deployment best practices of various components of the solution, refer to the vendor-specific documentation.
Server design considerations
VSPEX solutions are designed to run on a wide variety of server platforms. VSPEX defines the minimum CPU and memory resources required, but not a specific server type or configuration. The customer can use any server platform and configuration that meets or exceeds the minimum requirements.
For example, Figure 9 shows how a customer could implement the same server requirements by using either white-box servers or high-end servers. Both implementations achieve the required number of processor cores and amount of RAM, but the number and type of servers differ.
Figure 9. Compute layer flexibility
The choice of a server platform is not only based on the technical requirements of the environment, but also on the supportability of the platform, existing relationships with the server provider, advanced performance and management features, and many other factors. For example:
From a virtualization perspective, if a system’s workload is well understood, features like memory ballooning and transparent page sharing can reduce the aggregate memory requirement.
If the virtual machine pool does not have a high level of peak or concurrent usage, you can reduce the number of vCPUs. Conversely, if the applications being deployed are highly computational in nature, you might need to increase the number of CPUs and the amount of memory.
The server infrastructure must meet the following minimum requirements:
Sufficient CPU cores and memory to support the required number and types of virtual machines
Sufficient network connections to enable redundant connectivity to the system switches
Sufficient excess capacity to enable the environment to withstand a server failure and failover
For this solution, EMC recommends that you consider the following best practices for the server layer:
Use identical server units—Use identical or at least compatible servers to ensure that they share similar hardware configurations. VSPEX implements hypervisor-level high-availability technologies that might require similar instruction sets on the underlying physical hardware. By implementing VSPEX on identical server units, you can minimize compatibility problems in this area.
Use recent processor technologies—For new deployments, use recent revisions of common processor technologies. It is assumed that these will perform as well as, or better than, the systems used to validate the solution.
Implement high availability to accommodate single server failures—Implement the high-availability features available in the virtualization layer to ensure that the compute layer has sufficient resources to accommodate at least single server failures. This will also allow you to implement minimal-downtime upgrades. High availability and failover provides further details.
Note: When implementing hypervisor layer high availability, the largest virtual machine you can create is constrained by the smallest physical server in the environment.
Monitor resource utilization and adapt as needed—In any running system, monitor the utilization of resources and adapt as needed.
For example, the reference virtual desktop and required hardware resources in this solution assume that there are no more than five virtual CPUs for each physical processor core (5:1 ratio). In most cases, this provides an appropriate level of resources for the hosted virtual desktops; however, this ratio may not be appropriate in all cases. EMC recommends monitoring CPU utilization at the hypervisor layer to determine if more resources are required, and adding them as needed.
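The "withstand a single server failure" requirement described above can be expressed as a simple capacity check: the pool must still meet demand after losing its largest member. A minimal sketch; the server sizes and the 300-core requirement below are hypothetical:

```python
def survives_single_failure(server_cores, required_cores):
    """True if the pool still meets demand after losing its largest server."""
    if not server_cores:
        return False
    remaining = sum(server_cores) - max(server_cores)
    return remaining >= required_cores

# Hypothetical pool: five 80-core servers serving a 300-core requirement.
assert survives_single_failure([80, 80, 80, 80, 80], 300)   # 320 cores remain
assert not survives_single_failure([80, 80, 80, 80], 300)   # only 240 remain
```

Using identical server units, as recommended above, also makes this check straightforward: any failed server removes the same amount of capacity.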
Table 13 identifies the server hardware and the configurations validated in this solution.
Table 13. Server hardware

Servers for virtual desktops:
CPU: 1 vCPU per desktop (5 desktops per core); 350 cores across all servers for 1,750 virtual desktops; 700 cores across all servers for 3,500 virtual desktops
Memory: 2 GB RAM per virtual machine; 3.5 TB RAM across all servers for 1,750 virtual desktops; 7 TB RAM across all servers for 3,500 virtual desktops; 2 GB RAM reservation per vSphere host
Network: 3 x 10 GbE NICs per blade chassis, or 6 x 1 GbE NICs per standalone server
Notes:
The 5:1 vCPU to physical core ratio applies to the reference workload defined in this Design Guide. When deploying EMC Avamar, add CPU and RAM as needed for components that are CPU or RAM intensive. Refer to the relevant product documentation for information on Avamar resource requirements.
To support VMware vSphere High Availability (HA), deploy one server beyond the number needed to meet the minimum requirements in Table 13.
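The core and memory counts in Table 13 follow directly from the per-desktop figures. A quick sanity check, assuming the 5:1 vCPU-to-core ratio and 2 GB per desktop used throughout this solution:

```python
# Derive the Table 13 core and RAM totals from the per-desktop figures.
DESKTOPS_PER_CORE = 5     # 1 vCPU per desktop at a 5:1 vCPU-to-core ratio
RAM_GB_PER_DESKTOP = 2

for desktops in (1750, 3500):
    cores = desktops // DESKTOPS_PER_CORE
    ram_tb = desktops * RAM_GB_PER_DESKTOP / 1000  # decimal TB, as quoted in Table 13
    print(f"{desktops} desktops: {cores} cores, {ram_tb} TB RAM")
# 1750 desktops: 350 cores, 3.5 TB RAM
# 3500 desktops: 700 cores, 7.0 TB RAM
```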
VMware vSphere has a number of advanced features that help optimize performance and overall use of resources. This section describes the key features for memory management and considerations for using them with your VSPEX solution.
Figure 10 illustrates how a single hypervisor consumes memory from a pool of resources. vSphere memory management features such as memory over-commitment, transparent page sharing, and memory ballooning can reduce total memory usage and increase consolidation ratios in the hypervisor.
Figure 10. Hypervisor memory consumption
Memory virtualization techniques allow the vSphere hypervisor to abstract physical host resources such as memory to provide resource isolation across multiple virtual machines and avoid resource exhaustion. In cases where advanced processors (such as Intel processors with EPT support) are deployed, memory abstraction takes place within the CPU. Otherwise, it occurs within the hypervisor itself by using a feature known as shadow page tables.
vSphere provides the following memory management techniques:
Memory over-commitment—Memory over-commitment occurs when more memory is allocated to virtual machines than is physically present in a VMware vSphere host. Using sophisticated techniques such as ballooning and transparent page sharing, vSphere is able to handle memory over-commitment without any performance degradation. However, if more memory is actively used than is present on the server, vSphere might resort to swapping portions of a virtual machine's memory.
Non-Uniform Memory Access (NUMA)—vSphere uses a NUMA load-balancer to assign a home node to a virtual machine. Because memory for the virtual machine is allocated from the home node, memory access is local and provides the best possible performance. Applications that do not directly support NUMA also benefit from this feature.
Transparent page sharing—Virtual machines running similar operating systems and applications typically have identical sets of memory content. Page sharing allows the hypervisor to reclaim the redundant copies and return them to the host’s free memory pool for reuse.
Memory compression—vSphere uses memory compression to store pages that would otherwise be swapped out to disk through host swapping, in a compression cache located in the main memory.
Memory ballooning—Memory ballooning relieves host resource exhaustion by allocating free pages from the virtual machine to the host for reuse with little or no impact on the application’s performance.
Hypervisor swapping—Hypervisor swapping causes the host to force arbitrary virtual machine pages out to disk.
For further information, refer to the VMware white paper Understanding Memory Resource Management in VMware vSphere 5.0.
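A simple way to reason about memory over-commitment is to compare the total configured guest memory against the host's physical memory. The host size and desktop count below are illustrative assumptions, not validated figures from this solution:

```python
def overcommit_ratio(vm_mem_gb, host_mem_gb):
    """Configured-guest-memory to physical-memory ratio; > 1.0 means over-committed."""
    return sum(vm_mem_gb) / host_mem_gb

# Illustrative host: 256 GB of physical RAM hosting 160 desktops of 2 GB each.
ratio = overcommit_ratio([2] * 160, 256)
print(f"Overcommit ratio: {ratio:.2f}")  # Overcommit ratio: 1.25
```

A ratio above 1.0 relies on techniques such as transparent page sharing and ballooning to close the gap; if actively used memory exceeds physical memory, hypervisor swapping can occur, with the performance penalties described above.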
Proper sizing and configuration of the solution requires care when configuring server memory. This section provides guidelines for allocating memory to virtual machines and considers vSphere overhead and virtual machine memory settings.
vSphere memory overhead
There is memory space overhead associated with virtualizing memory resources. This overhead has two components:
The system overhead for the VMkernel
Additional overhead for each virtual machine
The overhead for the VMkernel is fixed, whereas the amount of additional memory for each virtual machine depends on the number of virtual CPUs and the amount of memory configured for the guest OS.
Virtual machine memory settings
Figure 11 shows the memory settings parameters in a virtual machine, including:
Configured memory—Physical memory allocated to the virtual machine at the time of creation
Reserved memory—Memory that is guaranteed to the virtual machine
Touched memory—Memory that is active or in use by the virtual machine
Swappable—Memory that can be de-allocated from the virtual machine if the host is under memory pressure from other virtual machines using ballooning, compression, or swapping.
Figure 11. Virtual machine memory settings
EMC recommends that you follow these best practices for virtual machine memory settings:
Do not disable the default memory reclamation techniques.
These lightweight processes provide flexibility with minimal impact to workloads.
Intelligently size memory allocation for virtual machines.
Over-allocation wastes resources, while under-allocation causes performance impacts that can affect other virtual machines sharing the same resources. Over-commitment can lead to resource exhaustion if the hypervisor cannot procure memory resources.
In severe cases, when hypervisor swapping occurs, virtual machine performance is adversely affected. Having performance baselines of your virtual machine workloads assists in managing this situation.
Allocating memory to virtual machines
Adequate server capacity is required for two purposes in the solution:
To support the required infrastructure services, such as authentication/authorization, DNS, and database services
For further details on the hosting requirements for these infrastructure services, refer to the VSPEX Private Cloud Proven Infrastructure Guide listed in Essential reading.
To support the virtualized desktop infrastructure
In this solution, 2 GB of memory is assigned to each virtual machine, as defined in Table 5. The solution was validated with statically assigned memory and no over-commitment of memory resources. If memory over-commitment is used in a real-world environment, regularly monitor the system memory utilization and associated page file I/O activity to ensure that a memory shortfall does not cause unexpected results.
Network design considerations
VSPEX solutions define minimum network requirements and provide general guidance on network architecture while allowing the customer to choose any network hardware that meets the requirements. If additional bandwidth is needed, it is important to add capability at both the storage array and the hypervisor host to meet the requirements. The options for network connectivity on the server depend on the type of server.
For reference purposes in the validated environment, EMC assumes that each virtual desktop generates 10 I/Os per second with an average size of 4 KB. This means that each virtual desktop generates at least 40 KB/s of traffic on the storage network. For an environment rated for 1,750 virtual desktops, this means a minimum of approximately 70 MB/sec, which is well within the bounds of modern networks. However, this does not consider other operations. For example, additional bandwidth is needed for:
User network traffic
Virtual desktop migration
Administrative and management operations
The requirements for each of these operations depend on how the environment is used. It is not practical to provide concrete numbers in this context. However, the networks described for the reference architectures in this solution should be sufficient to handle average workloads for these operations.
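The steady-state storage bandwidth estimate above can be reproduced directly from the per-desktop profile (10 IOPS at an average of 4 KB each). Note that this covers storage traffic only; user, migration, and management traffic are additional, as listed above:

```python
# Estimate steady-state storage network traffic from the reference desktop profile.
IOPS_PER_DESKTOP = 10
IO_SIZE_KB = 4

for desktops in (1750, 3500):
    kb_per_sec = desktops * IOPS_PER_DESKTOP * IO_SIZE_KB
    print(f"{desktops} desktops: ~{kb_per_sec / 1000:.0f} MB/s steady-state storage traffic")
# 1750 desktops: ~70 MB/s
# 3500 desktops: ~140 MB/s
```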
Regardless of the network traffic requirements, always provide at least two physical network connections for each logical network so that a single link failure does not affect the availability of the system. Design the network so that, in the event of a failure, the aggregate bandwidth is sufficient to accommodate the full workload.
The network infrastructure must meet the following minimum requirements:
Redundant network links for the hosts, switches, and storage
Support for link aggregation
Traffic isolation based on industry best practices
Validated network hardware
Table 14 lists the hardware resources for the network infrastructure validated in this solution.
Table 14. Minimum switching capacity

XtremIO for virtual desktop storage:
2 physical switches
2 x FC/FCoE or 2 x 10 GbE ports per VMware vSphere server, for storage network
2 x FC or 2 x 10 GbE ports per storage controller, for desktop data
VNX for optional user data storage:
2 physical switches
2 x 10 GbE ports per VMware vSphere server
1 x 1 GbE port per Control Station, for management
2 x 10 GbE ports per Data Mover, for data
Isilon for optional user data storage:
2 physical switches
2 x 10 GbE ports per VMware vSphere server
1 x 1 GbE port per node, for management
2 x 10 GbE ports per node, for data
Notes:
The solution can use 1 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled.
This configuration assumes that the VSPEX implementation is using rack-mounted servers; for implementations based on blade servers, ensure that similar bandwidth and high availability capabilities are available.
This section provides guidelines for setting up a redundant, highly available network configuration. The guidelines take into account network redundancy, link aggregation, traffic isolation, and jumbo frames.
The configuration examples are for IP-based networks, but similar best practices and design principles apply for the Fibre Channel storage network option.
Network redundancy
The infrastructure network requires redundant network links for each vSphere host, the storage array, the switch interconnect ports, and the switch uplink ports. This configuration provides both redundancy and additional network bandwidth. This configuration is also required regardless of whether the network infrastructure for the solution already exists or is deployed with other solution components.
Figure 12 provides an example of highly available XtremIO FC network topology.
Figure 12. Highly-available XtremIO FC network design example
Figure 13 shows a highly available network setup example for user data with a VNX family storage array. The same high-availability principle applies to an Isilon configuration as well. In either case, each node has two links to the switches.
Figure 13. Highly-available VNX Ethernet network design example
Link aggregation
EMC VNX and Isilon provide network high availability or redundancy by using link aggregation. Link aggregation enables multiple active Ethernet connections to appear as a single link with a single MAC address and, potentially, multiple IP addresses.2
In this solution, we configured the Link Aggregation Control Protocol (LACP) on the VNX or Isilon array to combine multiple Ethernet ports into a single virtual device. If one link in the aggregation fails, traffic fails over to another port, and all network traffic is distributed across the active links.
2 A link aggregation resembles an Ethernet channel but uses the LACP IEEE 802.3ad standard. This standard supports link aggregations with two or more ports. All ports in the aggregation must have the same speed and be full duplex.
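Link aggregation distributes flows, not packets: each flow hashes to one member link, so a single flow never exceeds one link's bandwidth while many flows spread across the aggregate. A simplified illustration of hash-based member selection; this is not the frame-distribution algorithm of any particular switch or array:

```python
import hashlib

def select_link(src_mac: str, dst_mac: str, links: list) -> str:
    """Pick a member link for a flow by hashing its MAC pair (illustrative only)."""
    digest = hashlib.sha256(f"{src_mac}-{dst_mac}".encode()).digest()
    return links[digest[0] % len(links)]

links = ["eth0", "eth1"]
# The same flow always lands on the same link, preserving frame order;
# different flows tend to spread across the members.
assert select_link("aa:bb", "cc:dd", links) == select_link("aa:bb", "cc:dd", links)
```

Keeping a flow pinned to one link preserves in-order delivery, which is why real LACP implementations hash on headers rather than round-robin individual frames.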
Traffic isolation
This solution uses virtual local area networks (VLANs) to segregate network traffic of various types to improve throughput, manageability, application separation, high availability, and security.
VLANs segregate network traffic to enable traffic of different types to move over isolated networks. In some cases, physical isolation is required for regulatory or policy compliance reasons, but in most cases logical isolation using VLANs is sufficient.
This solution calls for a minimum of two VLANs:
Client access
Management
Figure 14 shows the design of these VLANs with VNX. An Isilon-based configuration would share the same design principles.
Figure 14. Required networks
The client access network is for users of the system (clients) to communicate with the infrastructure, including the virtual machines and the CIFS shares hosted by the VNX or Isilon array. The management network provides administrators with dedicated access to the management connections on the storage array, network switches, and hosts.
Some best practices call for additional network isolation for cluster traffic, virtualization layer communication, and other features. These additional networks can be implemented, but they are not required.
Storage design considerations
XtremIO offers inline deduplication, inline compression, data-at-rest encryption, and native thin provisioning. Storage planning simply requires that you determine:
Volume size
Number of volumes
Performance requirements
Each volume must be larger than the logical space required by the server. An XtremIO cluster can fulfill the solution's performance requirements.
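Given the validated density of 125 desktops per datastore (see Table 16), the number of volumes for a deployment falls out of a ceiling division:

```python
import math

# Datastore (volume) count from the validated desktops-per-datastore density.
DESKTOPS_PER_DATASTORE = 125

for desktops in (1750, 3500):
    datastores = math.ceil(desktops / DESKTOPS_PER_DATASTORE)
    print(f"{desktops} desktops -> {datastores} datastores")
# 1750 desktops -> 14 datastores
# 3500 desktops -> 28 datastores
```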
vSphere supports more than one method of using storage when hosting virtual machines. We tested the configurations described in Table 15 using FC, and the storage layouts described adhere to all current best practices. If required, a customer or architect with the necessary training and background can make modifications based on their understanding of the system’s usage and load.
Table 15. Storage hardware
Purpose Configuration
XtremIO shared storage
Common:
2 x FC and 2 x 10 GbE interfaces per storage controller
1 x 1 GbE interface per storage controller for management
For 1,750 virtual desktops:
Starter X-Brick configuration with 13 x 400 GB flash drives
For 3,500 virtual desktops:
X-Brick configuration with 25 x 400 GB flash drives
Optional: Isilon shared storage disk capacity (required only if deploying an Isilon cluster to host user data):
4 x X410 nodes
2 x 800 GB EFD per node
34 x 1 TB SATA per node
Optional: VNX shared storage disk capacity:
For 1,750 virtual desktops: 2 x 200 GB EFD; 32 x 2 TB NL-SAS
For 3,500 virtual desktops: 4 x 200 GB EFD; 48 x 2 TB NL-SAS
Note: For VNX arrays, EMC recommends configuring at least one hot spare for every 30 drives of a given type. The recommendations in Table 15 include hot spares.
This section provides guidelines for setting up the storage layer of the solution to provide high availability and the expected level of performance.
vSphere storage virtualization
VMware vSphere provides host-level storage virtualization. It virtualizes the physical storage and presents the virtualized storage to the virtual machine.
A virtual machine stores its OS and all other files related to the virtual machine activities in a virtual disk. The virtual disk can be one file or multiple files. VMware uses a virtual SCSI controller to present the virtual disk to the guest OS running inside the virtual machine.
The virtual disk resides in either a VMware Virtual Machine File System (VMFS) datastore or an NFS datastore. An additional option, raw device mapping (RDM), allows the virtual infrastructure to connect a physical device directly to a virtual machine.
Figure 15 shows the various VMware virtual disk types, including:
VMFS—A cluster file system that provides storage virtualization optimized for virtual machines. VMFS can be deployed over any SCSI-based local or network storage.
Raw device mapping—Uses the Fibre Channel or iSCSI protocol and allows a virtual machine to have direct access to a volume on the physical storage.
Figure 15. VMware virtual disk types
High availability and failover
This VSPEX solution provides a highly available virtualized server, network, and storage infrastructure. When implemented in accordance with this guide, it provides the ability to survive single-unit failures with minimal impact to business operations. This section describes the high availability features of the solution.
Virtualization layer
EMC recommends that you configure high availability in the virtualization layer and allow the hypervisor to automatically restart virtual machines that fail. Figure 16 illustrates the hypervisor layer responding to a failure in the compute layer.
Figure 16. High availability at the virtualization layer
By implementing high availability at the virtualization layer, the infrastructure attempts to keep as many services running as possible, even in the event of a hardware failure.
Compute layer
While the choice of servers to implement in the compute layer is flexible, it is best to use enterprise-class servers designed for data centers. This type of server has redundant power supplies, as shown in Figure 17. Connect these to separate Power Distribution Units (PDUs) in accordance with your server vendor's best practices.
Figure 17. Redundant power supplies
We also recommend that you configure high availability in the virtualization layer. This means that you must configure the compute layer with enough resources to ensure that the total available resources meet the needs of the environment, even with a server failure. Figure 16 demonstrates this recommendation.
Network layer
Both Isilon and VNX family storage arrays provide protection against network connection failures at the array. Each vSphere host has multiple connections to user and storage Ethernet networks to guard against link failures, as shown in Figure 18. Spread these connections across multiple Ethernet switches to guard against component failure in the network.
Figure 18. VNX Ethernet network layer high availability
Having no single points of failure in the network layer ensures that the compute layer will be able to access storage and communicate with users even if a component fails.
Storage layer
XtremIO is designed for five-nines (99.999 percent) availability by using redundant components throughout the array, as shown in Figure 19 and Figure 20. All of the array components are capable of continued operation in case of hardware failure. XtremIO Data Protection (XDP) delivers the protection of RAID 6 while exceeding the performance of RAID 1 and the capacity utilization of RAID 5, protecting against data loss due to drive failures.
Figure 19. XtremIO series high availability
Figure 20. VNX series high availability
EMC storage arrays are designed to be highly available by default. Follow the installation guides to ensure that no single-unit failure results in data loss or unavailability.
Validation test profile
Table 16 shows the desktop definition and storage configuration parameters that we validated with the environment profile.
Table 16. Validated environment profile

Profile characteristic: Value
EMC XtremIO: 3.0.2
Hypervisor: vSphere 5.5 Update 2
Desktop OS (VDI): Windows 7 Enterprise Edition (32-bit); Windows 8.1 Enterprise Edition (32-bit)
Server OS (HSD): Windows Server 2012 R2
vCPUs per virtual desktop: 1
Number of virtual desktops per CPU core: 5
RAM per virtual desktop: 2 GB
Desktop provisioning method: MCS or PVS
Average IOPS per virtual desktop at steady state: 10 IOPS
Applications: Internet Explorer 10; Office 2010; Adobe Reader XI; Adobe Flash Player 11 ActiveX; Doro PDF printer 1.8
Workload generator: Login VSI
Workload type: Office worker
Number of datastores to store virtual desktops: 14 for 1,750 virtual desktops; 28 for 3,500 virtual desktops
Number of virtual desktops per datastore: 125
Disk and RAID type for XtremIO virtual desktop datastores: 400 GB eMLC SSD drives; EMC XtremIO proprietary data protection (XDP), which delivers RAID 6-like data protection with performance better than RAID 10
EMC Data Protection configuration guidelines
Table 17 shows the data protection environment profile that we validated for the solution.
The solution outlines the backup storage (initial and growth) and retention needs of the system. Gather additional information to further size Avamar, including tape-out needs, RPO and RTO specifics, and multisite environment replication needs.
Table 17. Data protection profile characteristics

Profile characteristic: Value
User data: 17.6 TB for 1,750 virtual desktops; 35 TB for 3,500 virtual desktops (10 GB per desktop)
Daily change rate for user data: 2%
Retention policy: 30 daily; 4 weekly; 1 monthly
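A first-pass estimate of protected capacity can be derived from the profile in Table 17. This naive calculation ignores Avamar deduplication, which substantially reduces stored backup data in practice, so treat it as an upper bound on raw protected data rather than a sizing result:

```python
# Naive backup sizing from the Table 17 profile (no deduplication assumed).
DESKTOPS = 3500
USER_DATA_GB_PER_DESKTOP = 10
DAILY_CHANGE_RATE = 0.02

initial_tb = DESKTOPS * USER_DATA_GB_PER_DESKTOP / 1000
daily_change_gb = DESKTOPS * USER_DATA_GB_PER_DESKTOP * DAILY_CHANGE_RATE
print(f"Initial user data: {initial_tb} TB; daily change: {daily_change_gb:.0f} GB")
# Initial user data: 35.0 TB; daily change: 700 GB
```

As noted above, a full Avamar sizing exercise also needs tape-out requirements, RPO/RTO specifics, and any multisite replication needs.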
Data protection layout
Avamar provides various deployment options depending on the specific use case and the recovery requirements. In this solution, Avamar is deployed with an Avamar data store, which enables unstructured user data to be backed up directly to the Avamar system for simple file-level recovery. This data protection solution unifies the backup process with the deduplication software and system to achieve high levels of performance and efficiency.
Chapter 6 Reference Documentation
This chapter presents the following topics:
EMC documentation ................................................................................................. 66
Other documentation ............................................................................................... 66
EMC documentation
The following documents, located on EMC Online Support, provide additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative.
EMC XtremIO Storage Array User Guide
EMC XtremIO Storage Array Operations Guide
EMC XtremIO Storage Array Software Installation and Upgrade Guide
EMC XtremIO Storage Array Hardware Installation and Upgrade Guide
EMC XtremIO Storage Array Security Configuration Guide
EMC XtremIO Storage Array Pre-Installation Checklist
EMC XtremIO Storage Array Site Preparation Guide
EMC VNX5400 Unified Installation Guide
EMC VSI for VMware vSphere: Storage Viewer Product Guide
EMC VSI for VMware vSphere: Unified Storage Management Product Guide
VNX Installation Assistant for File/Unified Worksheet
VNX FAST Cache: A Detailed Review White Paper
Deploying Microsoft Windows 7 Virtual Desktops—Applied Best Practices White Paper
EMC PowerPath/VE for VMware vSphere Installation and Administration Guide
EMC PowerPath Viewer Installation and Administration Guide
EMC VNX Unified Best Practices for Performance—Applied Best Practices White Paper
Other documentation
The following documents, available on the VMware website, provide additional and relevant information:
VMware vSphere Installation and Setup Guide
VMware vSphere Networking
VMware vSphere Resource Management
VMware vSphere Storage Guide
VMware vSphere Virtual Machine Administration
VMware vSphere Virtual Machine Management
VMware vCenter Server and Host Management
Installing and Administering VMware vSphere Update Manager
Preparing the Update Manager Database
Preparing vCenter Server Databases
Understanding Memory Resource Management in VMware vSphere 5.0
The following documents, available on the Citrix website, provide additional and relevant information:
Definitive Guide to XenApp 7.6 and XenDesktop 7.6
Windows 7 Optimization Guide for Desktop Virtualization
Windows 8 and 8.1 Virtual Desktop Optimization Guide
Storage Center system requirements
The following documents, available on the Microsoft website, provide additional and relevant information:
Installing Windows Server 2012 R2
SQL Server Installation (SQL Server 2012)
Appendix A Customer Sizing Worksheet
This appendix presents the following topic:
Customer Sizing Worksheet for end-user computing ............................................... 69
Customer Sizing Worksheet for end-user computing
Before selecting a reference architecture on which to base a customer solution, use the Customer Sizing Worksheet to gather information about the customer’s business requirements and to calculate the required resources.
Table 18 shows a blank worksheet. A standalone copy of the worksheet, in Microsoft Office Word format, is attached to this Design Guide so that you can easily print it.
Table 18. Customer sizing worksheet

User type                  vCPUs   Memory (GB)   IOPS   Equivalent         No. of   Total
                                                        reference          users    reference
                                                        virtual desktops            desktops
---------------------------------------------------------------------------------------------
(User type 1)
  Resource requirements    _____   _____         _____
  Equivalent reference
  virtual desktops         _____   _____         _____  _____              _____    _____

(User type 2)
  Resource requirements    _____   _____         _____
  Equivalent reference
  virtual desktops         _____   _____         _____  _____              _____    _____

(User type 3)
  Resource requirements    _____   _____         _____
  Equivalent reference
  virtual desktops         _____   _____         _____  _____              _____    _____

(User type 4)
  Resource requirements    _____   _____         _____
  Equivalent reference
  virtual desktops         _____   _____         _____  _____              _____    _____

Total                                                                               _____
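The worksheet arithmetic can be sketched in code. The sketch below assumes a hypothetical reference virtual desktop of 1 vCPU, 2 GB of memory, and 10 IOPS, and hypothetical example user types; substitute the reference virtual desktop characteristics defined earlier in this guide before using it for real sizing.

```python
import math

# Hypothetical reference virtual desktop (replace with the values
# defined in this guide's reference workload section)
REF = {"vcpus": 1, "memory_gb": 2, "iops": 10}

def equivalent_reference_desktops(vcpus, memory_gb, iops, ref=REF):
    """Per-resource equivalents are rounded up; a user type's
    equivalent is the maximum across all three resources."""
    return max(math.ceil(vcpus / ref["vcpus"]),
               math.ceil(memory_gb / ref["memory_gb"]),
               math.ceil(iops / ref["iops"]))

def total_reference_desktops(user_types):
    """user_types: list of (vcpus, memory_gb, iops, num_users).

    Fills the 'Total' row: sum over user types of
    equivalent reference desktops x number of users."""
    return sum(equivalent_reference_desktops(v, m, i) * n
               for v, m, i, n in user_types)

# Hypothetical example: 100 heavy users (2 vCPU, 4 GB, 12 IOPS)
# and 400 standard users (1 vCPU, 2 GB, 8 IOPS)
total = total_reference_desktops([(2, 4, 12, 100), (1, 2, 8, 400)])
```

Taking the maximum across resources (rather than the sum) mirrors the worksheet's intent: a user type is constrained by whichever resource demands the most reference desktops.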
To view and print the worksheet:
1. In Adobe Reader, open the Attachments panel in either of the following ways:
   Select View > Show/Hide > Navigation Panes > Attachments.
   Click the Attachments icon, as shown in Figure 21.
Figure 21. Printable customer sizing worksheet
2. Under Attachments, double-click the attached file to open and print the worksheet.