Dell Compellent Storage Center
XenServer 6.x Best Practices
Document revision

Date        Revision  Description
2/16/2009   1         Initial 5.0 documentation
5/21/2009   2         Documentation update for 5.5
10/1/2010   3         Document revised for 5.6 and iSCSI MPIO
12/21/2010  3.1       Updated iSCSI information
8/22/2011   4.0       Documentation updated for 6.0
11/29/2011  4.1       Update for software iSCSI information
THIS BEST PRACTICES GUIDE IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN
TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT
EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.
© 2011 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without
the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.
Dell, the DELL logo, the DELL badge, and Compellent are trademarks of Dell Inc. Other trademarks and
trade names may be used in this document to refer to either the entities claiming the marks and names
or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than
its own.
Contents
Document revision ................................................................................................. 3
Contents .................................................................................................................. 5
General syntax ..................................................................................................... 8
Conventions ......................................................................................................... 8
Preface ................................................................................................................... 9
Audience ............................................................................................................ 9
Purpose .............................................................................................................. 9
Customer support .................................................................................................. 9
Introduction ........................................................................................................... 10
XenServer Storage Overview ........................................................................................ 11
XenServer Storage Terminology ............................................................................... 11
Shared iSCSI Storage............................................................................................. 11
Shared Fibre Channel Storage ................................................................................. 12
Shared NFS ........................................................................................................ 13
Volume to Virtual Machine Mapping .................................................................... 14
NIC Bonding vs. iSCSI MPIO ................................................................................ 14
Multi-Pathing ..................................................................................................... 15
Enable Multi-pathing in XenCenter...................................................................... 15
Software iSCSI ......................................................................................................... 16
Overview .......................................................................................................... 16
Open iSCSI initiator Setup with Dell Compellent ........................................................... 17
Multipath with Dual Subnets ................................................................................... 17
Configuring Dedicated Storage NIC ...................................................................... 18
To Assign NIC Functions using the XE CLI ............................................................... 18
XenServer Software iSCSI Setup.......................................................................... 19
Login to Compellent Control Ports ...................................................................... 19
Configure Server Objects in Enterprise Manager ..................................................... 20
View Multipath Status ..................................................................................... 22
Multi-path Requirements with Single Subnet ............................................................... 23
Configuring Bonded Interface ............................................................................ 23
Configuring Dedicated Storage Network ............................................................... 24
To assign NIC functions using the XE CLI: .............................................................. 25
XenServer Software iSCSI Setup.......................................................................... 25
Configure Server Objects in Enterprise Manager ..................................................... 27
Multi-path Requirements with Dual Subnets, Legacy Port Mode ........................................ 28
Log in to Dell Compellent iSCSI Target Ports .......................................................... 30
View Multipath Status ..................................................................................... 33
iSCSI SR Using iSCSI HBA ........................................................................................ 33
Fibre Channel ......................................................................................................... 38
Overview .......................................................................................................... 38
Adding a FC LUN to XenServer Pool .......................................................................... 38
Data Instant Replay to Recover Virtual Machines or Data ................................................ 40
Overview ..................................................................................................... 40
Recovery Option 1 – One VM per LUN ........................................................................ 40
Recovery Option 2 – Recovery Server ........................................................................ 50
Dynamic Capacity ..................................................................................................... 62
Dynamic Capacity Overview .............................................................................. 62
Dynamic Capacity with XenServer ....................................................................... 62
Data Progression ...................................................................................................... 63
Data Progression on XenServer ........................................................................... 63
Boot from SAN ......................................................................................................... 64
VM Metadata Backup and Recovery ............................................................................... 65
Backing Up VM MetaData ....................................................................................... 65
Importing VM MetaData ......................................................................................... 67
Disaster Recovery ..................................................................................................... 68
Replication Overview ........................................................................................... 68
Test XenServer Disaster Recovery ....................................................................... 70
Recovering from a Disaster .................................................................................... 73
Replication Based Disaster Recovery ......................................................................... 77
Disaster Recovery Replication Example ................................................................ 77
Live Volume ....................................................................................................... 82
Overview .......................................................................................................... 82
Appendix 1 Troubleshooting ........................................................................................ 84
XenServer Pool FC Mapping Issue ............................................................................. 84
Starting Software iSCSI ......................................................................................... 85
Two ways to Start iSCSI ................................................................................... 86
Software iSCSI Fails to Start as Server Boot ................................................................. 86
Wildcard Doesn’t Return All Volumes ........................................................................ 86
View Multipath Status ........................................................................................... 87
XenCenter GUI displays Multipathing Incorrectly .......................................................... 87
Connectivity issues with a Fibre Channel Storage Repository ........................................... 88
General syntax
Figure 1, Document Syntax
Item Convention
Menu items, dialog box titles, field names, keys Bold
Mouse click required Click:
User Input Monospace Font
User typing required Type:
Website addresses http://www.compellent.com
Email addresses [email protected]
Conventions
Notes are used to convey special information or instructions.
Timesavers are tips specifically designed to save time or reduce the number of steps.
Caution indicates the potential for risk including system or data damage.
Warning indicates that failure to follow directions could result in bodily harm.
Preface
Audience
The audience for this document is System Administrators who are responsible for the setup and
maintenance of Citrix XenServer and associated storage. Readers should have a working knowledge of
the installation and management of Citrix XenServer and the Dell Compellent Storage Center.
Purpose
This document provides best practices for the setup, configuration and management of Citrix XenServer
with Dell Compellent Storage Center. This document is highly technical and intended for storage and
server administrators as well as information technology professionals interested in learning more about
how Citrix XenServer integrates with Compellent Storage Center.
Customer support
Dell Compellent provides live support at 1-866-EZSTORE (866.397.8673), 24 hours a day, 7 days a week,
365 days a year. For additional support, email Dell Compellent at [email protected]. Dell
Compellent responds to emails during normal business hours.
Additional information on XenServer 6.0 can be found in the Citrix XenServer 6.0 Administration Guide
located on the Citrix download site. Information on Dell Compellent Storage Center is located on the
Dell Compellent Knowledge Center.
Introduction
This document will provide configuration examples, tips, recommended settings, and other storage
guidelines a user can follow while integrating Citrix XenServer with the Dell Compellent Storage
Center. This document has been written to answer many frequently asked questions with regard to
how XenServer interacts with the Dell Compellent Storage Center's various features such as Dynamic
Capacity, Data Progression, Replays, and Remote Instant Replay. This document focuses on XenServer
6.0; however, most of the concepts also apply to XenServer 5.x unless otherwise noted.
Dell Compellent advises customers to read the XenServer documentation, which is publicly available on
the Citrix XenServer knowledge base, for additional information on installation and configuration.
This document assumes the reader has had formal training or has advanced working knowledge of the
following:
• Installation and configuration of Citrix XenServer
• Configuration and operation of the Dell Compellent Storage Center
• Operating systems such as Windows or Linux
• The Citrix XenServer 6.0 Administrator's Guide

NOTE: The information contained within this document is based on general circumstances and
environments. Actual configurations may vary in different environments.
XenServer Storage Overview
XenServer Storage Terminology
In working with XenServer 6.0, there are four object classes that are used to describe, configure, and manage storage:
• Storage Repositories (SRs) are storage targets containing homogeneous virtual disks (VDIs). SR commands provide operations for creating, destroying, resizing, cloning, connecting and discovering the individual VDIs that they contain. A storage repository is a persistent, on-disk data structure, so the act of "creating" a new SR is similar to that of formatting a disk: for single LUN-based SR types, i.e. LVM over iSCSI or Fibre Channel, the creation of a new SR involves erasing any existing data on the specified LUN. SRs are long-lived, and may in some cases be shared among XenServer hosts, or moved between them. The interface to storage hardware allows VDIs to be supported on a large number of SR types. With built-in support for locally connected IDE, SATA, SCSI and SAS drives, and remotely connected iSCSI and Fibre Channel storage, the XenServer host SR is very flexible. Each XenServer host can access multiple SRs of any type in parallel. When hosting direct-attached shared Storage Repositories on a Dell Compellent Storage Center, there are two options: an iSCSI-connected LUN or a Fibre Channel-connected LUN.
• Physical Block Devices (PBDs) represent the interface between a physical server and an attached SR. PBDs are connector objects that allow a given SR to be mapped to a XenServer Host. PBDs store the device configuration fields that are used to connect to and interact with a given storage target. PBD objects manage the run-time attachment of a given SR to a given XenServer Host.
• Virtual Disk Images (VDIs) are an on-disk representation of a virtual disk provided to a VM. VDIs are the fundamental unit of virtualized storage in XenServer. Similar to SRs, VDIs are persistent, on-disk objects that exist independently of XenServer Hosts.
• Virtual Block Devices (VBDs) are a connector object (similar to the PBD described above) that allows mappings between VDIs and Virtual Machines (VMs). In addition to providing a mechanism to attach (or plug) a VDI into a VM, VBDs allow fine-tuning of parameters regarding QoS (quality of service), statistics, and the boot ability of a given VDI.
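The four object classes above can also be inspected from the xe CLI. The sketch below is illustrative only: the xe() stub simply echoes each command so the sequence can be reviewed without a XenServer host, and the UUIDs are hypothetical placeholders.

```shell
# Sketch: inspecting the four storage object classes with the xe CLI.
# The xe() stub echoes each command so the flow can be reviewed without a
# XenServer host; remove the stub to run these on a real pool member.
xe() { echo "xe $*"; }

SR_UUID="0a1b2c3d-0000-0000-0000-000000000000"   # hypothetical SR uuid
VM_UUID="4e5f6a7b-0000-0000-0000-000000000000"   # hypothetical VM uuid

xe sr-list params=uuid,name-label,type                 # Storage Repositories
xe pbd-list sr-uuid="$SR_UUID"                         # host-to-SR connectors
xe vdi-list sr-uuid="$SR_UUID" params=uuid,name-label  # virtual disks in the SR
xe vbd-list vm-uuid="$VM_UUID"                         # VDI-to-VM connectors
```

On a real host, each command prints the matching object records, which is a quick way to confirm how an SR is plugged (PBDs) and consumed (VDIs/VBDs).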
Shared iSCSI Storage
Citrix XenServer on Dell Compellent Storage provides support for shared SRs on iSCSI-attached LUNs.
iSCSI is supported using the open-iSCSI software initiator or a supported iSCSI Host Bus Adapter (HBA).
Shared iSCSI support is implemented based on a Logical Volume Manager (LVM). LVM-based storage is
high-performance and allows virtual disks to be dynamically resized. Virtual disks are fully allocated as
an isolated volume on the underlying physical disk and so there is a minimum of storage virtualization
overhead imposed. As such, this is a good option for high-performance storage.
Below is a diagrammatic representation of using shared storage with iSCSI HBAs in XenServer. The
second diagram illustrates shared storage with the open iSCSI initiator.
Figure 2, Shared iSCSI Storage with iSCSI HBA
Figure 3, Shared iSCSI with Software Initiator
Shared Fibre Channel Storage
XenServer hosts with Dell Compellent Storage support Fibre Channel SANs using Emulex or QLogic
host bus adapters (HBAs). Logical unit numbers (LUNs) are mapped to the XenServer host as disk
devices. Like HBA iSCSI storage, Fibre Channel storage support is implemented based on the same
Logical Volume Manager with the same benefits as iSCSI storage, just utilizing a different data I/O
path.
Figure 4, Shared Fibre Channel Storage
Shared NFS
XenServer supports NFS file servers, such as the Dell NX3000 with Dell Compellent storage, to host SRs.
NFS storage repositories can be shared within a resource pool of XenServers. This allows virtual
machines to be migrated between XenServers within the pool using XenMotion.
Attaching an NFS storage repository requires the hostname or IP address of the NFS server. The NFS
server must be configured to export the specified path to all XenServers in the pool, or attaching the
SR will fail.
Using an NFS share is a relatively simple way to create an SR and doesn't involve the complexity of
iSCSI or the expense of Fibre Channel. However, there are some limitations that must be considered
before implementing NFS. An NFS SR utilizes a similar network infrastructure as iSCSI to support
redundant paths to the NFS share. The main difference is that iSCSI uses MPIO to support multipathing
and load balancing between multiple paths, while NFS is limited to one network interface per SR.
Redundancy in an NFS environment can be accomplished by using XenServer bonded interfaces.
Bonded interfaces are active/passive and won't provide load balancing across both physical adapters
as iSCSI can.
Figure 5, Shared NFS SR
A new feature with XenServer 6.0 is the ability to provide a high availability (HA) quorum disk on an
NFS volume. However, the XenServer 6.0 Disaster Recovery feature can only be enabled when using
LVM over HBA or software iSCSI. The underlying protocol choice for SRs is a business decision that will
be unique to each environment. Given the performance benefits and the Disaster Recovery requirement,
Dell Compellent recommends using an iSCSI or FC HBA, or software iSCSI, rather than NFS.
Volume to Virtual Machine Mapping
XenServer fully supports a many-to-one VM-to-volume (LUN) deployment. The number of
VMs on a volume depends on the workload and IOPS requirements of the VMs. When multiple virtual
disks share a volume they also share the disk queue for that volume on the host. For this reason, care
should be taken to prevent a bottleneck condition on the volume. Additionally, replication and DR
become a factor when hosting multiple VMs on a volume. This is due to replication and recovery taking
place on a per-volume basis.
NIC Bonding vs. iSCSI MPIO
NIC bonds can improve XenServer host resiliency by using two physical NICs as if they were one. If one
NIC within the bond fails the host's network traffic will automatically be routed over the second NIC.
NIC bonds support active/active mode, but only support load balancing of VM traffic across the
physical NICs. Any given virtual network interface will only use one of the links in the bond at a time.
Load-balancing is not available for non-VM traffic.
MPIO also provides host resiliency by using two physical NICs. MPIO uses round robin to balance
storage traffic between separate targets on the Dell Compellent Storage Center. Spreading the load
across multiple Dell Compellent iSCSI targets avoids bottlenecks while providing network adapter,
subnet, and switch redundancy.
If all front end iSCSI ports on the Dell Compellent system are on the same subnet, then NIC bonding is
the better option, since XenServer iSCSI MPIO requires at least two separate subnets. In that
configuration all iSCSI connections will use the same physical NIC, because bonding does not support
active/active connections for anything but VM traffic. For this reason, it is recommended that front
end iSCSI ports be configured across two subnets. This allows load balancing across all NICs and
failover with MPIO.
Multi-Pathing
Multi-pathing protects against failures of HBAs, switch ports, switches, and SAN I/O ports. It is recommended
to utilize Multi-Pathing to increase availability and redundancy for critical systems such as production
deployments of XenServer when hosting critical servers.
XenServer supports Active/Active Multi-Pathing for iSCSI and FC protocols for I/O datapaths. Dynamic
Multi-Pathing uses a round-robin mode load balancing algorithm, so both routes will have active traffic
on them during normal operations. Multi-Pathing can be enabled via XenCenter or on the command
line. Please see the XenServer 6.0 Administrator Guide for information on enabling Multi-Pathing on
XenServer hosts. Enabling Multi-Pathing requires a server restart and should be enabled before storage
is added to the server. Only use Multi-Pathing when there are multiple paths to the storage center.
Enable Multi-pathing in XenCenter
1. Right click on the server in XenCenter and select “Enter Maintenance Mode”
2. Right click on the server and select “Properties”
3. In the Properties window, select “Multipathing”
4. Check the “Enable Multipathing on this server” box and click OK
5. The server will need to be restarted for Multipathing to take effect
Figure 6, Enable Multipathing
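The same setting can be applied from the CLI instead of XenCenter. The sketch below follows the XenServer 6.0 Administrator's Guide; the host UUID is a hypothetical placeholder, and the xe() stub just echoes each command so the sequence can be reviewed outside a real host.

```shell
# Sketch: enabling multipathing from the xe CLI.
# As with the XenCenter steps, the host should be in maintenance mode
# (no attached SRs) and restarted afterward.
xe() { echo "xe $*"; }          # echo stub; remove on a real host

HOST_UUID="1111aaaa-0000-0000-0000-000000000000"   # hypothetical host uuid

xe host-param-set other-config:multipathing=true uuid="$HOST_UUID"
xe host-param-set other-config:multipathhandle=dmp uuid="$HOST_UUID"
```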
Software iSCSI
Overview
XenServer supports shared Storage Repositories (SRs) on iSCSI LUNs. iSCSI is implemented using the
open-iSCSI software initiator or a supported iSCSI HBA. XenServer iSCSI Storage Repositories
are supported with Dell Compellent Storage Center running in either Legacy mode or Virtual Port
mode.
Shared iSCSI using the software iSCSI initiator is implemented based on the Logical Volume Manager
(LVM) and provides the same performance benefits provided by LVM on local disks. Shared iSCSI SRs
using the software-based host initiator are capable of supporting VM agility. Using XenMotion, VMs can
be started on any XenServer host in a resource pool and migrated between them with no noticeable
interruption.
iSCSI SRs utilize the entire LUN specified at creation time and may not span more than one LUN. CHAP
support is provided for client authentication, during both the data path initialization and the LUN
discovery phases.
NOTE: Use dedicated network adapters for iSCSI traffic. The default connection can be used; however,
it is always best practice to separate iSCSI traffic from other network traffic.
All iSCSI initiators and targets must have a unique name to ensure they can be identified on the
network. An initiator has an iSCSI initiator address, and a target has an iSCSI target address.
Collectively these are called iSCSI Qualified Names, or IQNs.
XenServer hosts support a single iSCSI initiator which is automatically created and configured with a
random IQN during host installation. iSCSI targets commonly provide access control via iSCSI initiator
IQN lists, so all iSCSI targets/LUNs to be accessed by a XenServer host must be configured to allow
access by the host's initiator IQN. Similarly, targets/LUNs to be used as shared iSCSI SRs must be
configured to allow access by all host IQNs in the resource pool.
iSCSI targets that do not provide access control will typically default to restricting LUN access to a
single initiator to ensure data integrity. If an iSCSI LUN is intended for use as a shared SR across
multiple XenServer hosts in a resource pool ensure that multi-initiator access is enabled for the
specified LUN.
It is strongly suggested to change the default XenServer IQN to one that is consistent with a naming
schema in the iSCSI environment. The XenServer host IQN value can be adjusted using XenCenter, or
via the CLI with the following command when using the iSCSI software initiator:
xe host-param-set uuid=<valid_host_id> other-config:iscsi_iqn=<new_initiator_iqn>
Caution: It is imperative that every iSCSI target and initiator have a unique IQN. If a non-unique IQN
identifier is used, data corruption and/or denial of LUN access can occur.
Caution: Do not change the XenServer host IQN with iSCSI SRs attached. Doing so can result in failures
connecting to new targets or existing SRs.
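As a sketch of the command above, a schema-based IQN might be set like this. The host UUID and IQN value are hypothetical examples only, and the xe() stub just echoes the command so it can be reviewed without a XenServer host.

```shell
# Sketch: replacing the random install-time IQN with a schema-based one.
# Per the caution above, do this BEFORE any iSCSI SRs are attached.
xe() { echo "xe $*"; }          # echo stub; remove on a real host

HOST_UUID="2222bbbb-0000-0000-0000-000000000000"     # hypothetical host uuid
NEW_IQN="iqn.2011-11.com.example:xenserver-host01"   # hypothetical naming schema

xe host-param-set uuid="$HOST_UUID" other-config:iscsi_iqn="$NEW_IQN"
```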
Open iSCSI initiator Setup with Dell Compellent
Caution: Issues have been identified with the Citrix implementation of multipathing and Storage
Center in virtual port mode. It is strongly recommended to use iSCSI HBAs when implementing
XenServer with Storage Center in virtual port mode.
When planning iSCSI it is important that networks used for software-based iSCSI have separate
switching and different subnets from those used for management. The use of separate subnets ensures
that management and storage traffic flows over the intended interface and avoids complex
workarounds that may compromise reliability or performance.
If planning to utilize iSCSI storage with Multi-Pathing, it is important to ensure that none of the
redundant paths reported by iSCSI are within the same subnet as the management interface. If this
occurs the iSCSI initiator may not be able to successfully establish a session over each path because the
management interface comes up separately from the storage interface(s).
There are three options when implementing the XenServer software iSCSI initiator to connect to Dell
Compellent storage. They are:
• Multipath with dual subnets, virtual port mode - In this configuration the Storage Center is set
to Virtual Port mode and the front end controller ports are on two separate subnets. This
option uses MPIO for multipathing. This is the recommended option when HA is required.
• Multipath with single subnet - In this configuration the Storage Center is set to Virtual Port
mode and all controller front end ports are on the same subnet. This option uses NIC Bonding
for path failover. This is also an option when the servers have a single iSCSI Storage NIC and HA
is not required.
• Multipath with dual subnets, Legacy port mode - This is the option for HA when the Storage
Center is set to Legacy Port mode.
Multipath with Dual Subnets
The requirements for software iSCSI Multi-pathing with dual subnets and Compellent Storage Center in
virtual port mode are as follows:
• XenServer 6.0
• iSCSI using 2 unique dedicated storage NICs/subnets
o Citrix best practices state that these 2 subnets should be different from the XenServer
management network.
• Multi-pathing enabled on all XenServer pool hosts
• iSCSI Target IP addresses for the Storage Center Front End Control ports
o In the example below the iSCSI FE control ports on the Storage Center controller are
assigned IP addresses 10.25.0.10/16 and 10.26.0.10/16
In this configuration the Storage Center is set to virtual port mode and the iSCSI Front End ports are
on two separate subnets different from the management interface. The Storage Center is
configured with two control ports, one for each subnet. Multipathing is controlled through MPIO.
Figure 7, Dual Subnet, MPIO
Configuring Dedicated Storage NIC
XenServer allows use of either XenCenter or the XE CLI to configure and dedicate a NIC to specific
functions, such as storage traffic.
Assigning a NIC to a specific function will prevent the use of the NIC for other functions such as host
management, but requires that the appropriate network configuration be in place to ensure the NIC is
used for the desired traffic. For example, to dedicate a NIC to storage traffic the NIC, storage target,
switch, and/or VLAN must be configured so the target is only accessible over the assigned NIC.
Ensure that the dedicated storage interface uses a separate IP subnet which is not routable from the
main management interface. If this is not enforced, storage traffic may be directed over the main
management interface after a host reboot due to the order in which network interfaces are initialized.
To Assign NIC Functions using the XE CLI
1. Ensure that the Physical Interface (PIF) is on a separate subnet, or routing is configured to suit your
network topology in order to force the desired traffic over the selected PIF.
2. Get the PIF UUID for the interface
2.1. If on a stand-alone server, use xe pif-list to list the PIFs on the server
2.2. If on a host in a resource pool, first type xe host-list to retrieve a list of the hosts and
UUIDs
2.3. Use the command xe pif-list host-uuid=<host-uuid> to list the host PIFs
3. Setup an IP configuration for the PIF, adding appropriate values for the mode parameter and if
using static IP addressing the IP, netmask, gateway, and DNS parameters:
xe pif-reconfigure-ip mode=<DHCP | Static> uuid=<pif-uuid>
Example: xe pif-reconfigure-ip mode=static ip=10.0.0.10
netmask=255.255.255.0 gateway=10.10.0.1 uuid=<PIF-UUID>
4. Set the PIF's disallow-unplug parameter to true:
xe pif-param-set disallow-unplug=true uuid=<PIF-UUID>
5. Set the Management Purpose of the interface:
xe pif-param-set other-config:management_purpose="Storage" uuid=<PIF-UUID>
6. Repeat this process for each eth interface in the XenServer host that will be dedicated for storage
traffic. For iSCSI MPIO configurations this should be a minimum of two eth interfaces that are on
separate subnets.
For more information on this topic see the Citrix XenServer 6.0 Administrator Guide.
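Steps 3 through 6 above can be collapsed into a short per-interface script. This is a sketch only: the PIF UUID is a hypothetical placeholder, the IP addressing follows the 10.25.0.0/16 example subnet used elsewhere in this document, and the xe() stub echoes each command so the sequence can be reviewed without a XenServer host.

```shell
# Sketch: dedicating one PIF to storage traffic (repeat per storage NIC).
xe() { echo "xe $*"; }          # echo stub; remove on a real host

PIF_UUID="3333cccc-0000-0000-0000-000000000000"   # hypothetical; from xe pif-list

# Static IP on the storage subnet; no gateway is set, so the PIF is not
# routable from the management network, as recommended above.
xe pif-reconfigure-ip mode=static ip=10.25.0.20 netmask=255.255.0.0 uuid="$PIF_UUID"
xe pif-param-set disallow-unplug=true uuid="$PIF_UUID"
xe pif-param-set other-config:management_purpose="Storage" uuid="$PIF_UUID"
```

For an MPIO configuration, run the script once for each of the two storage PIFs, using an address on each of the two subnets.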
XenServer Software iSCSI Setup
A server object on the Dell Compellent Storage Center can be created once the XenServer has been
configured for iSCSI traffic.
NOTE: Best practice recommendation is to change the XenServer IQN from the randomly assigned IQN
to one that identifies the system on the iSCSI network. The IQN must be unique to avoid data
corruption or loss.
Gather Dell Compellent iSCSI Target Info
Within Storage Center Manager, go to Controllers, IO Cards, iSCSI and note the IP address of the two
control ports. These should be on the same IP subnet as the server’s storage NICs.
Figure 8, Control Port IP Addresses
In this example the IP addresses are:
10.25.0.10/16
10.26.0.10/16
Login to Compellent Control Ports
In this step the iscsiadm command will be utilized in the XenServer CLI to discover and login to all the
Compellent iSCSI targets.
1. From the XenServer console run the following command for each iSCSI control port.
iscsiadm -m discovery --type sendtargets --portal <Control Port IP>:3260
Example: iscsiadm -m discovery --type sendtargets --portal 10.25.0.10:3260
Figure 9, Discover Storage Center Ports
NOTE: If problems are encountered while running the iscsiadm commands, see the iSCSI
Troubleshooting section at the end of this document.
2. Repeat the discovery process for each Dell Compellent Control Port.
3. Once all target ports are discovered run iscsiadm with the Login parameter:
iscsiadm -m node --login
Figure 10, Log into Storage Center Ports
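Steps 1 through 3 above can be combined into a short loop over both control ports. The sketch below uses the example portal addresses from this document; the iscsiadm() stub echoes each command so the sequence can be reviewed without a live iSCSI network.

```shell
# Sketch: discover and log in to both Storage Center control ports.
iscsiadm() { echo "iscsiadm $*"; }   # echo stub; remove on a real host

# Control port addresses from the example earlier in this document.
for portal in 10.25.0.10:3260 10.26.0.10:3260; do
    iscsiadm -m discovery --type sendtargets --portal "$portal"
done

# Log in to every discovered target node record.
iscsiadm -m node --login
```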
The server objects can be configured in the Storage Center now that the server has logged in.
Configure Server Objects in Enterprise Manager
Follow the steps below to configure the server object for access to the Storage Center
1. In Enterprise Manager, go to Storage Center and select Storage Management
2. In the object tree, right click on Servers and select Create Server
3. Complete all options as specified in the Compellent Administrator's Guide
4. Uncheck the “Use iSCSI Name” box
5. Select both connections listed under WWName and click OK to finish
NOTE: Unchecking the “Use iSCSI Name” box will aid in identifying the status of MPIO paths.
Figure 11, Create Server, Enterprise Manager
NOTE: Starting in Storage Center version 5.5.x, the steps listed above must be completed using
Enterprise Manager. It is not possible to create server objects with the “Use iSCSI Names” box
unchecked when connected directly to the Storage Center.
After creating the server object the volumes can be created and mapped to the server. In a server
pool, map the LUN to all servers specifying the same LUN number. See the Dell Compellent
documentation for detailed instructions on creating and mapping volumes.
NOTE: Use Server Cluster objects to map volumes to multiple servers in a resource pool.
Once the volumes are mapped to the server they can be added to the XenServer using XenCenter or the
CLI. Below are the steps for adding storage using XenCenter. The steps for adding storage through the
CLI can be found in the XenServer 6.0 Administrator’s Guide.
1. Select the server or pool in XenCenter and click on New Storage
2. Select the Software iSCSI option under virtual disk storage, click next
Figure 12, Add iSCSI Disk
3. Give the new Storage Repository a name and click next
4. Enter one of the Dell Compellent control ports in the “Target Host” field, click Discover IQNs
5. Click Discover LUNs
6. Select the LUN to add under “Target LUN” and click finish
Figure 13, Add iSCSI SR
NOTE: When the Storage Center is in virtual port mode and adding storage with the wildcard option, an
incomplete list of volumes mapped to the server may be returned. This is a known issue with the
XenCenter GUI. To work around the issue, cycle through the Control Ports in the Target Host field
using the (*) wildcard Target IQNs until the Target LUN appears. This is a GUI issue and will not affect
multipathing.
The SR should now be available to the server. Repeat the steps for mapping and adding storage for any
additional SRs.
View Multipath Status
To view the multipath status, use the following command:
mpathutil status
Figure 14, Multipath Status
Multi-path Requirements with Single Subnet
The process for configuring multi-pathing in a single subnet environment is similar to that of a dual
subnet environment. The key difference is that redundancy is handled by the bonded network
adapters. The requirements for software iSCSI multi-pathing with the Compellent Storage Center in a
single subnet are as follows:
• XenServer 6.0
• iSCSI using 2 bonded NICs
o Citrix best practice states that these 2 NICs should be bonded through the XenCenter GUI.
• iSCSI Target IP addresses for the Storage Center Front End Control ports
o In this example the IP address for the Control port will be 10.35.0.10
• One network storage interface on XenServer, on the bonded interface.
Figure 15, Single Subnet
Configuring Bonded Interface
In this configuration redundancy to the network is provided by two bonded NICs. Bonding the two NICs
will create a new bonded interface that network interfaces will be associated with. This will create
multiple paths with one storage IP address on the server.
NOTE: The process of configuring a single-path, non-redundant connection to a Dell Compellent Storage
Center is the same, except that the steps to bond the two NICs are skipped.
NOTE: Create NIC bonds as part of the initial resource pool creation, prior to joining additional hosts to
the pool. This will allow the bond configuration to be replicated to new hosts as they join the pool.
The steps below outline the process of creating a NIC bond in XenServer 6.0
1. Go into Citrix XenCenter, select the server and go to the NIC tab.
2. At the bottom of the NIC window is the option to create a bond. Select the NICs you would like
to bond and click create.
Figure 16, Add Bonded Interface
3. Once complete, there will be a new bonded NIC displayed in the list of NICs.
Figure 17, Bonded Interface
Configuring Dedicated Storage Network
XenServer allows use of either XenCenter or the XE CLI to configure and dedicate a network to specific
functions, such as storage traffic. The steps below outline the process of creating a dedicated storage
network interface through the CLI.
Assigning a network to storage will prevent the use of the network for other functions such as host
management, but requires that the appropriate configuration be in place in order to ensure the
network is used for the desired traffic. For example, to dedicate a network to storage traffic the NIC,
storage target, switch, and/or VLAN must be configured such that the target is only accessible over the
assigned NIC. This allows use of standard IP routing to control how traffic is routed between multiple
NICs within a XenServer.
Before dedicating a network interface as a storage interface for use with iSCSI SRs, ensure that the
dedicated interface uses a separate IP subnet which is not routable from the main management
interface. If this is not enforced, then storage traffic may be directed over the main management
interface after a host reboot, due to the order in which network interfaces are initialized.
To assign NIC functions using the XE CLI:
1. Ensure that the Bond PIF is on a separate subnet, or routing is configured to force the desired
traffic over the selected PIF.
2. Get the PIF UUID for the Bond interface
2.1. If on a stand-alone server, use xe pif-list to list the PIFs on the server
2.2. If on a host in a resource pool, first type xe host-list to retrieve a list of the hosts and UUIDs
2.3. Use the command xe pif-list host-uuid=<host-uuid> to list the host PIFs
3. Setup an IP configuration for the PIF identified in the previous step, adding appropriate values for
the mode parameter and if using static IP addressing:
3.1. xe pif-reconfigure-ip mode=<DHCP|Static> uuid=<pif-uuid>
Example: xe pif-reconfigure-ip mode=static ip=10.0.0.10 netmask=255.255.255.0 gateway=10.10.0.1
uuid=3f5a072f-ea3b-de28-aeab-47c7d7f2b58f
4. Set the PIF's disallow-unplug parameter to true:
4.1. xe pif-param-set disallow-unplug=true uuid=<PIF-UUID>
5. Set the Management Purpose of the interface
5.1. xe pif-param-set other-config:management_purpose="Storage" uuid=<PIF-UUID>
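Steps 3 through 5 can be collected into one sequence. The UUID is the placeholder value from the example above, and the commands are echoed as a dry run; remove the echo to apply them:

```shell
# Dry-run sketch of steps 3-5: static IP on the bond PIF, pin the PIF,
# and mark it for storage traffic. Values are from the example above.
PIF_UUID="3f5a072f-ea3b-de28-aeab-47c7d7f2b58f"
echo xe pif-reconfigure-ip mode=static ip=10.0.0.10 \
     netmask=255.255.255.0 gateway=10.10.0.1 uuid="$PIF_UUID"
echo xe pif-param-set disallow-unplug=true uuid="$PIF_UUID"
echo xe pif-param-set other-config:management_purpose="Storage" uuid="$PIF_UUID"
```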
For more information on this topic see the Citrix XenServer 6.0 Administrator Guide.
XenServer Software iSCSI Setup
Once the XenServer has been configured for iSCSI traffic a server object on the Dell Compellent Storage
Center can be created.
NOTE: Best practice is to change the XenServer IQN from the randomly assigned IQN to one that
identifies the system on the iSCSI network. The IQN must be unique to avoid data corruption or loss.
1. To gather the Storage Center iSCSI target info, in Storage Center go to Controllers, IO Cards,
iSCSI and note the IP address of the control port. It should be on the same IP subnet as the server's
storage NICs.
Figure 18, Control Port IP address
In this example the IP address is:
10.35.0.10/16
2. Login to Compellent Control Ports. In this step the iscsiadm command will be utilized in the
XenServer CLI to discover and login to all the Dell Compellent iSCSI targets.
3. From the XenServer console, run the following command for the iSCSI control port.
iscsiadm -m discovery --type sendtargets --portal <Control Port IP>:3260
Example: iscsiadm -m discovery --type sendtargets --portal 10.35.0.10:3260
Figure 19, Discover Storage Center Ports
NOTE: If problems are encountered while running the iscsiadm commands, see the iSCSI
troubleshooting section at the end of this document.
4. Once all target ports are discovered, run iscsiadm with the Login parameter:
iscsiadm -m node --login
Figure 20, Log into Storage Center Ports
5. Now that the server has logged in, the server objects can be configured in the Storage Center.
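Steps 3 and 4 above, combined against the single example control port for this configuration (dry run using echo; remove the echo to execute):

```shell
# Dry run: single-subnet discovery against the one control port, then login.
PORTAL="10.35.0.10"   # example control port IP from this section
echo iscsiadm -m discovery --type sendtargets --portal "${PORTAL}:3260"
echo iscsiadm -m node --login
```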
Configure Server Objects in Enterprise Manager
Follow the steps below to configure the server object for access to the Storage Center
1. In Enterprise Manager, go to Storage Center and select Storage Management
2. In the object tree, right click on Servers and select Create Server. Complete all options as specified
in the Compellent Administrator's Guide, including server name and operating system.
3. Select the server IQN listed under WWName and click OK to finish
Figure 21, Create Server in Enterprise Manager
After creating the server object the volumes can be created and mapped to the server. In a server
pool, be sure the LUNs are mapped to the servers with the same LUN number. See the Dell Compellent
Admin Guide for detailed instructions on creating and mapping volumes.
NOTE: Use Server Cluster objects to map volumes to multiple servers in a resource pool.
Once the volumes are mapped to the server they can be added to the XenServer using XenCenter or the
CLI. Below are the steps for adding storage using XenCenter. Steps for adding storage through the CLI
can be found in the XenServer 6.0 Administrator’s Guide.
1. Select the server or pool in XenCenter and click on New Storage
2. Select the Software iSCSI option under virtual disk storage, click next
Figure 22, Add iSCSI Disk
3. Give the new Storage Repository a name and click next
4. Enter the Dell Compellent control port in the “Target Host” field, click Discover IQNs
5. Click Discover LUNs to view the available LUNs.
Figure 23, Add iSCSI SR
6. Select the LUN to add under “Target LUN” and click finish
NOTE: When the Storage Center is in virtual port mode and storage is added with the wildcard option,
an incomplete list of volumes mapped to the server may be returned. This is a known issue with the
XenCenter GUI. To work around the problem, cycle through the Target Host IP addresses using the (*)
wildcard IQN until the Target LUN appears. This is a GUI issue and will not affect multipathing.
The SR will now be available to the server. Repeat the steps for mapping and adding storage for any
additional SRs.
Multi-path Requirements with Dual Subnets, Legacy Port Mode
Dell Compellent Legacy Port Mode uses the concept of Fault Domains to provide redundant paths to the
Storage Center. To ensure redundancy, a fault domain consists of a primary port on one controller and
a failover port on the second controller. The two ports are linked in the same domain by the identical
Fault Domain number. This provides redundancy with the requirement that half the Front End ports
will only be utilized in the event of a failover. The requirements for software iSCSI Multi-pathing with
the Compellent Storage Center Legacy Port Mode are as follows:
• XenServer 6.0
• iSCSI using 2 unique dedicated storage NICs/subnets
o Citrix best practice states that these 2 subnets should be different from the XenServer
management network.
• Multi-pathing enabled on all XenServer Pool Hosts
• iSCSI Target IP addresses for the Storage Center Front End ports
o In this example the primary iSCSI Front End port IP addresses are 10.10.63.2, 10.10.62.1,
172.31.37.134, 172.31.37.131
In this configuration the Storage Center is set to Legacy Port mode and the iSCSI Front End ports
are on two subnets separate from each other and the management interface. Multipathing is
controlled through MPIO.
Figure 24, Legacy Port Mode
The first step to configure XenServer for Dell Compellent in Legacy Port mode is to identify the primary
iSCSI target IP addresses on each controller of the Storage Center. This can be done by going to the
controllers listed in Storage Center, expanding IO cards, iSCSI and clicking on each iSCSI port listed.
Figure 25, Legacy Port IP addresses
Log in to Dell Compellent iSCSI Target Ports
This step uses the iscsiadm command in the XenServer CLI to discover and login to all the Compellent
iSCSI targets.
1. For each of the Target IP addresses enter the following command:
iscsiadm -m discovery --type sendtargets --portal <control port IP>:3260
Example: iscsiadm -m discovery --type sendtargets --portal 10.10.62.1:3260
Figure 26, Discover Storage Center Ports
2. Repeat the discovery process for each Target Port
3. Once all the ports are discovered, run the iscsiadm command with Login parameter to connect the
host to the Storage Center
iscsiadm -m node --login
Figure 27, Log into Storage Center Ports
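The legacy-mode discovery and login sequence can be sketched as one loop over the four example primary Front End port addresses (a dry run using echo; remove the echo to execute):

```shell
# Dry run: discover each primary legacy-mode Front End port, then log in.
# The four target IPs are the example addresses from this section.
TARGETS="10.10.63.2 10.10.62.1 172.31.37.134 172.31.37.131"
for ip in $TARGETS; do
    echo iscsiadm -m discovery --type sendtargets --portal "${ip}:3260"
done
echo iscsiadm -m node --login
```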
Configure Server Objects in Enterprise Manager
Follow the steps below to configure the server object for access to the Storage Center
1. In Enterprise Manager, go to Storage Center and select Storage Management
2. In the object tree, right click on Servers and select Create Server
3. Complete all options as specified in the Dell Compellent Administrators Guide.
Figure 28, Create Server in Enterprise Manager
After creating the server object the volumes can be created and mapped to the server. See the Dell
Compellent documentation for detailed instructions on creating and mapping volumes.
NOTE: Use Server Cluster objects to map volumes to multiple servers in a resource pool.
Once the volumes are mapped to the server they can be added to the XenServer using XenCenter or the
CLI. Below are the steps for adding storage using XenCenter. Steps for adding storage through the CLI
can be found in the XenServer 6.0 Administrator’s Guide.
1. Select the server or Pool in XenCenter and click on New Storage
2. Select the Software iSCSI option under virtual disk storage, click next
Figure 29, Add iSCSI Disk
3. Give the new Storage Repository a name and click Next
4. Enter the Dell Compellent control port in the “Target Host” field, click Discover IQNs
Figure 30, Discover Storage Center LUNs
5. Click Discover LUNs
Figure 31, Add iSCSI SR
6. Select the LUN to add under “Target LUN” and click finish
NOTE: When Storage Center is in legacy port mode adding storage may return an incomplete list of
volumes mapped to the server. This is a known issue with the XenCenter GUI where only the LUNs
active on the first IP address in Target Host are returned. To work around this issue, cycle through the
Target Hosts IP using the (*) wildcard Target IQN until the Target LUN appears. This is a GUI issue and
will not affect multipathing.
The SR will now be available to the server. Repeat the steps above for mapping and adding storage for
any additional SRs.
View Multipath Status
To view the multipath status, use the following command:
mpathutil status
Figure 32, Multipath Status
iSCSI SR Using iSCSI HBA
If using an iSCSI HBA to create the iSCSI SR, either the CLI from the control domain needs to be used, or
the BIOS level management interface needs to be updated with target information. Depending on the
HBA being used, the initiator IQN for the HBA needs to be configured; consult the documentation for
that HBA to configure the IQN. Once the IQN has been configured for the HBA, use the Storage Center
GUI to create a new LUN. However, instead of using the XenServer's IQN, specify the IQN of the various
ports of the HBA. Do this for every XenServer host in the pool. QLogic's HBA CLI is included in the
XenServer host and located at:
Qlogic: /opt/QLogic_Corporation/SANsurferiCLI/iscli
If using Emulex iSCSI HBAs, consult the Emulex documentation for instructions on installing and
configuring the HBA.
For the purposes of an example, this guide illustrates how the QLogic iSCSI HBA CLI iscli can be used to
configure IP addresses on a dual port QLE4062C iSCSI HBA Adapter, add the iSCSI server to the
Compellent SAN, and configure a LUN for the server. This setup will also utilize Multi-Pathing since
there are two iSCSI HBA ports.
1. From the XenServer console launch the SANsurfer iscli.
1.1. From the XenServer command prompt type: /opt/QLogic_Corporation/SANsurferiCLI/iscli
NOTE: This configuration can also be performed during the server boot by entering Ctrl-Q when
prompted.
Figure 33, iSCLI Menu
2. Configure IP Address for the iSCSI HBA
2.1. To set the IP address for the HBA, choose option 4 (Port Level Info & Operations), then option 2
(Port Network Settings Menu).
2.2. Enter option 4 (Select HBA Port) to select the appropriate HBA port, then select option 2
(Configure IP Settings).
Figure 34, Configure HBA IP Address
2.3. Enter the appropriate IP settings for the HBA adapter port. When finished, exit and save or
select another HBA port to configure.
2.3.1. In this example another HBA port will be configured.
Figure 35, Enter IP Address Information
2.4. From the Port Network Settings Menu select option 4 to select an additional HBA port to
configure. Enter 2 to select the second HBA port. Once the second HBA port is selected, choose
option 2 (Configure IP Settings) from the Port Network Settings Menu to input the appropriate IP
settings for the second HBA port.
Figure 36, Enter IP Address Info
2.5. Choose option 5 (Save changes and reset HBA (if necessary)). Then select Exit until back at the
main menu.
The iSCSI name or IQN can also be changed using the iscli utility. This menu can be accessed by
selecting option 4 (Port Level Info & Operations Menu) from the main menu, then selecting option 3
(Edit Configured Port Settings Menu), then option 3 (Port Firmware Settings Menu), then option 7
(Configure Advanced Settings). Select <Enter> until reaching iSCSI_Name, then enter a unique IQN name
for the adapter.
3. The next step is to establish a target from XenServer so it registers with the Compellent Storage
Center.
3.1. From the main interactive iscli menu select option 4 (Port Level Info & Operations).
3.2. From the Port Level Info & Operations menu select option 7 (Target Level Info & Operations).
3.3. On the HBA target menu screen select option 6 (Add a Target).
3.3.1. Select Enter until reaching the TGT_TargetIPAddress option. Enter the target IP
address of the Compellent Controller. (Repeat for each target.)
3.3.1.1. In this example 10.10.64.1 and 10.10.65.2 are used. These are the primary
iSCSI connections on both Dell Compellent Storage Center Controllers.
Figure 37, Enter Target IP Address
3.3.2. Once all targets are entered for HBA 0, select option 9 to save the port information.
3.3.3. Select option 10 to select the second HBA port.
3.3.4. Repeat the steps in section 3.3 for the iSCSI targets.
3.4. Enter option 12 to exit. Enter YES to save the changes.
3.5. Exit out of the iscli utility.
4. Add the server iSCSI connection HBAs to the Dell Compellent Storage Center.
4.1. Logon to the Storage Center console.
4.2. Expand Servers and select the location or folder to store the server in.
4.2.1. For ease of use the servers in this view are separated into folders based on function.
4.3. Right click the location to create the server in and select Create Server.
Note: You may have to uncheck “Show only active/up connections” in the Create Server wizard.
4.4. Select the appropriate iSCSI HBA/IQNs for the new server object, then click Continue.
4.5. Depending on the Storage Center version, select the XenServer operating system, or select
Other Multipath OS if XenServer is not listed.
5. Repeat the preceding 4 steps for each XenServer in the Pool.
6. Once all the XenServer hosts are added to the Compellent Storage Center, create a new volume
on the Compellent Storage Center and map it to all the XenServers in the pool with the same LUN
number, or create a Compellent Clustered server object, add all the XenServers to the Cluster, and
map the volume to the XenServer Clustered server object.
7. The final step of the process is adding the new volume to XenServer.
7.1. Logon to XenCenter, right click on the appropriate XenServer to add the connection to, and
select New Storage Repository. If the storage is being added to a resource pool, select the Pool
instead of the server.
7.2. Select the Hardware HBA option as the iSCSI connection is using iSCSI HBAs, then click Next.
Figure 38, Storage Type
There is a short delay while XenServer probes for available LUNs.
7.3. Select the appropriate LUN. Give the SR an appropriate name and click Finish.
7.4. A warning is displayed that the LUN will be formatted and any data present will be destroyed.
Click Yes to format the disk.
Fibre Channel
Overview
XenServer provides support for shared Storage Repositories (SRs) on Fibre Channel (FC) LUNs. FC is
supported on the Dell Compellent SAN by utilizing QLogic or Emulex HBAs.
Fibre Channel support is implemented based on the Linux Volume Manager (LVM) and provides the same
performance benefits provided by LVM VDIs in the local disk case. Fibre Channel SRs are capable of
supporting VM agility using XenMotion: VMs can be started on any XenServer host in a resource pool and
migrated between them with no noticeable downtime.
The following sections detail the steps involved in adding a new Fibre Channel connected volume to a
XenServer pool.
Adding a FC LUN to XenServer Pool
The following section will cover the creation of the volume on the Compellent Storage Center, the LUN
mapping on the Dell Compellent, and adding the new SR to the XenServer pool.
This procedure assumes that the server's Fibre Channel connections have been zoned to the Dell
Compellent Storage Center and the server objects have been added to the Storage Center.
1. Once all the XenServer servers are added to the Dell Compellent Storage Center, create a new
volume and map it to all the XenServers in the pool with the same LUN Number, or create a
Compellent Clustered server object, add all the XenServers to the Cluster, and map the volume
to the XenServer Clustered server object.
2. When finished mapping the volume to all the XenServers in the Pool, launch the XenCenter
management console, right click on the pool name and select New Storage Repository.
Figure 39, New Storage Repository
3. On the Choose the type of new storage screen select Hardware HBA then click Next.
Figure 40, Choose Storage Type
4. On the Select the LUN to reattach or create a new SR screen select the appropriate volume, then
enter a descriptive name. Click Finish to continue.
Figure 41, Select LUN
5. A dialog box will appear asking: Do you wish to format the disk? Click Yes to format the SR.
6. The SR should now be created and mapped to all the servers in the pool.
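As a hedged alternative to the XenCenter wizard, the same shared FC SR can be created from the XE CLI using the lvmohba SR type. The SCSI ID and host UUID below are hypothetical placeholders (probe for real values first), and the command is echoed as a dry run:

```shell
# Dry-run sketch: create a shared Fibre Channel SR from the XE CLI.
# SCSI_ID and HOST_UUID are hypothetical placeholders; discover real
# values first with:  xe sr-probe type=lvmohba
SCSI_ID="hypothetical-scsi-id"
HOST_UUID="hypothetical-pool-master-uuid"
# Remove the leading 'echo' to execute against a live pool.
echo xe sr-create host-uuid="$HOST_UUID" type=lvmohba shared=true \
     content-type=user name-label="Compellent FC SR" \
     device-config:SCSIid="$SCSI_ID"
```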
Data Instant Replay to Recover Virtual Machines or Data
Overview
The Dell Compellent Storage Center system allows for the creation of Data Instant Replays (snapshots)
to recover crash-consistent states of virtual machines.
When mapping Dell Compellent iSCSI or Fibre Channel volumes to XenServer, the SRs will be created as
LVM disks, therefore stamping each SR with a unique identifier (UUID). When creating Dell Compellent
Replays of LVM volumes, the Replay cannot be mapped to the XenServer without first un-mapping the
original volume from the server, because the two volumes share the same LVM UUID and will conflict.
There are two different options to recover data or virtual machines using Dell Compellent Replays.
Recovery Option 1 – One VM per LUN
The first option is the easiest way to recover; however, it also requires more administration of LUNs.
This recovery option utilizes a 1:1 ratio of virtual machines to LUNs on the Dell Compellent SAN. This
option allows for easy recovery of volumes/virtual machines to the XenServer by creating a local
recovery view of the Volume in Storage Center.
Prior to mapping the Replay to the XenServer(s) remove the mapping to the original volume. Since the
Replay has the same UUID as the original volume, XenServer will reattach to the volume just as if it
was the original.
The following process details how to recover a virtual machine to a previous state using the 1:1
mapping of Virtual Machines to LUNs.
The Dell Compellent System does not limit the number of LUNs that can be created; however, the
server's HBAs usually have a limit of 256 LUNs per server.
Recovery Scenario
• XenServer Pool containing two servers, XenServer6P1S1 and XenServer6P1S2.
• All servers are connected to the Dell Compellent Storage Center using Fibre Channel and zoned accordingly.
• The Dell Compellent Storage System has been setup to take hourly Replays of the volume running one virtual machine named W2k8-Xen6.
1. As shown below, a volume is created on the Dell Compellent system and named Xen6_P1_SR2. Also note the replay of this volume created at 08:30:00 PM.
Replays can be manually or automatically generated on the Compellent system by utilizing the Replay
Scheduler or manually through the Storage Center Console.
Figure 42, Compellent Replays
2. The figure below depicts the VM named w2k8-xen6 running.
Figure 43, W2k8-Xen6 Online
In this example a catastrophe strikes w2k8-xen6 rendering it unbootable. By using Dell Compellent Replays the server can be quickly recovered to the time of the last snapshot.
3. Verify the VM is shut down in the XenServer console.
4. Highlight the Xen6_P1_SR2 volume hosting w2k8-xen6 and select Forget Storage Repository to
remove this volume from the XenServer Pool.
Figure 44, Forget SR
5. Go to the Dell Compellent Storage Center Console and highlight the volume containing the VM. In this example this is the Xen6_P1_SR2 volume.
6. Select the Mapping button.
Figure 45, Volume Mapping
7. Note the LUN number for the mapping.
8. Highlight each of the mappings listed individually and select the Remove Mapping button.
9. Select Yes on the Are you sure screen.
10. Select Yes (Remove Now) on the Warnings screen.
11. Repeat until all mappings are removed from the volume.
Figure 46, Remove Mappings
12. With the volume in question selected from the Dell Compellent Storage Center console, click the Replays button. Right click on the replay to recover to and select Create Volume from Replay. In this example it is the replay dated 09/10/2011 08:30:00 pm.
Figure 47, Local Recovery
13. On the Create Volume from Replay screen enter an appropriate name for the Replay Volume and select the Create Now button.
14. On the Map Volume to Server screen select one of the appropriate servers in the pool to map the view volume to, and then select Continue.
15. Go to the Advanced options screen enter the appropriate LUN number then select Continue. In this example LUN 2 is being used as that was the original volume number.
16. When completed select Create Now. 17. This procedure only mapped the volume to one server, if more mappings are required select
the Mappings button and add the appropriate mappings to the volume to represent all the
Dell Compellent Storage Center XenServer 6.x Best Practices
Page 44
servers in the XenServer Pool. In the example below the server XenServer6P1S1 and XenServer6P1S2 are both added to the new View Volume.
Figure 48, Volume Mappings
18. Return to the XenCenter console, right click on the pool and select New Storage Repository.
Figure 49, New SR
19. Select the appropriate type of storage for the volume then select Next. In this example it is a FC connection so hardware HBA should be selected.
Figure 50, SR Type
20. On the Select the LUN to reattach or create a new SR on screen select the appropriate volume, name it accordingly, then select Finish.
Figure 51, Select LUN
21. A message should appear asking if the SR should be Reattached, Formatted or canceled. Select
Reattach.
Figure 52, Reattach SR
22. With the replay of the SR now attached to the Pool, the virtual disk can be mapped to the
virtual machine. From XenCenter highlight the virtual machine to be recovered, then select the
Storage tab. Notice that the VM doesn't have any disks associated with it.
23. Click the Attach button to associate a disk to the VM.
Figure 53, Attach Disk
24. Expand the recovered SR, select the appropriate disk and click Attach.
Figure 54, Select Disk
25. The Virtual machine can now be started in the same state it was in at the time of the last
Replay. In this example the last Replay was taken at 8:30 pm.
Figure 55, Start VM
26. If satisfied with the result, the original volume can be removed, leaving the view volume as the primary volume, by following the remaining steps.
CAUTION: The remaining steps will destroy the original volume.
27. Highlight the original volume, right click on it and choose delete.
Figure 56, Delete Volume
28. Confirm the action by clicking Yes to move the volume to the Recycle Bin.
29. To completely remove the volume from the system, expand the Recycle Bin, right click on the volume, and choose Delete.
Figure 57, Delete Volume from Recycle Bin
30. Confirm the delete by clicking Yes.
31. The original volume is now removed, leaving the recovery volume as the primary volume. Once the associated Replays of the view volume expire, they will be coalesced into the volume as shown below.
Figure 58, Volume with Replays Associated
Figure 59, Replay Coalescing
Figure 60, Coalescing Complete
Recovery Option 2 – Recovery Server
The second option available for recovering virtual machines with Dell Compellent Replays is to use a standalone recovery XenServer. This option is useful when multiple virtual machines are hosted on each SR, as it allows recovery of a single VM to a recovery server utilizing Dell Compellent Replays.
As mentioned earlier, a limitation prevents mounting the Replay to the same XenServer or Pool because the UUIDs associated with the disks will conflict. Adding a separate standalone XenServer recovery server allows administrators to map the recovery volume to the recovery server and attach the SR. A new virtual machine can then be created and mapped to the appropriate virtual disk. The recovered virtual machines can then be exported and imported back into the production system.
Below is a step by step guide on recovering virtual machines to a standalone XenServer or a Remote DR
site XenServer.
Recovery Scenario
• XenServer Pool containing two servers, XenServer6P1S1 and XenServer6P1S2.
• Standalone (Recovery) XenServer named XenRecovery.
• All servers are connected to the Dell Compellent Storage Center using Fibre Channel and are already zoned accordingly.
• A replay is created on the volume Xen6_P1_SR2.
1. From the Dell Compellent Storage Center console, select the volume to recover and click the Replays Button.
Figure 61, Volume Replays
2. Right click on the replay to recover to and select Create Volume from Replay. In the example below the Replay used is dated 09/11/2011 08:09:54 am.
Figure 62, Local Recovery
3. On the Create Volume from Replay screen enter an appropriate name for the Replay volume and click the Create Now button.
4. On the Select a Server to Map screen select one of the recovery servers to map the view volume to, click Continue.
5. In the Map Volume to Server Advanced options, enter the appropriate LUN numbers for the server port. If mapping to multiple servers set each mapping to the same LUN number. In the example LUN 12 is used. Click Create Now.
NOTE: When mapping to multiple servers in a Pool, use the Storage Center Cluster Server Object. This will create the mapping to all servers with the same LUN number.
Figure 63, LUN Number
6. The next step after mapping the storage to the recovery XenServer is to add the Storage
Repository to the recovery server. A separate copy of XenCenter must be used or the original Pool must first be removed from the
console. XenCenter will not allow the addition of this Storage Repository to the recovery server if it
sees that volume mapped elsewhere.
Figure 64, XenCenter Console
7. From XenCenter right click on recovery XenServer and select New Storage Repository.
Figure 65, New SR
8. Select the appropriate storage type and click Next.
Figure 66, Select Disk Type
9. Enter a name for the new SR and click Next.
Figure 67, Enter SR Name
10. Select the recovered LUN, name it, and click Finish.
Figure 68, Select Recovery LUN
11. A warning message should appear stating that an existing SR was found on the selected LUN. Click Reattach.
Figure 69, Reattach SR
12. Now that the SR has been added to the recovery server, the process of recovering the VMs can be started. The next step is to create a new virtual machine as a placeholder.
13. Right click on the recovery XenServer and choose New VM.
Figure 70, New Virtual Machine
14. Select the appropriate template for the server then click Next.
Figure 71, OS Template
15. Enter in a name for the server then click Next. Typically the actual server name of the VM being recovered is used.
Figure 72, Virtual Machine Name
16. Click Next on the Locate the operating system installation media screen.
Figure 73, Installation Media
17. Click Next at Select a home server screen.
Figure 74, Select VM Home Server
18. Enter the appropriate number of vCPUs and amount of Memory, then click Next.
Figure 75, Size CPU and Memory
19. On the screen Enter the information about the virtual disks for the new virtual machine, select a location to store a temporary virtual disk, then click Next. Typically it is best to store the temporary disk on a SR that isn’t being used for recovery.
Figure 76, Temporary SR Disk Location
20. On the Add or remove virtual network interfaces screen click Add, select the appropriate network, then click Next.
Figure 77, Select Network
21. On the Virtual machine configuration is complete screen uncheck Start VM automatically and click Finish.
Figure 78, Uncheck Start VM Automatically
22. From the XenCenter Console select the newly created VM, then select the Storage tab.
23. Highlight the virtual disk temporarily attached to the VM and select Delete or Detach. Since this disk contains no information, it is OK to delete it.
Figure 79, Detach Disk
24. Click Yes at the Delete system disk message.
Figure 80, Delete System Disk
25. Once the temporary disk is deleted click the Attach button to select the original disk from the recovered Volume. Expand the recovered LUN and select the appropriate disk to attach.
Figure 81, Attach Disk
NOTE: If there are multiple disks in the Storage Repository with no name, it may take some trial and error to connect to the correct disk. Use the Storage tab to detach and reattach disks until the correct one is selected. Restoring the Metadata will prevent this issue. If a Virtual Machine Metadata backup has been taken on the Volume, use the procedure outlined in the “VM Metadata Backup and Recovery” section to recover the names.
From this point the VM can be started, exported, copied etc. Typically the VM would be exported and
imported back into the production Pool.
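The export/import round trip back to production can be scripted with the xe CLI. A sketch follows; the VM name and file path are examples only, not values from this environment.

```shell
# Sketch only: export a recovered VM from the recovery server, then
# import it into the production pool. The VM name and path are examples.

# On the recovery XenServer: export the VM to an XVA file
xe vm-export vm="FileServer01" filename=/mnt/backup/FileServer01.xva

# On a host in the production pool: import the XVA onto a production SR
xe vm-import filename=/mnt/backup/FileServer01.xva sr-uuid=<sr-uuid>
```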
Dynamic Capacity
Dynamic Capacity Overview
Dell Compellent's Thin Provisioning, called Dynamic Capacity, delivers the highest storage utilization
possible by eliminating allocated but unused capacity. Dynamic Capacity completely separates storage
allocation from utilization, enabling users to allocate any size virtual volume upfront yet only consume
actual physical capacity when data is written by the application.
Dynamic Capacity with XenServer
When XenServer is connected to Dell Compellent storage via iSCSI or Fibre Channel, the Storage Repository is created as an LVM (Logical Volume Manager) repository. When the volume is created on the Dell Compellent system, by default the newly created volume consumes zero space. Space is acquired only when data is written to the volume, and only the written space is consumed.
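The effect of Dynamic Capacity can be observed from the XenServer side by comparing what has been allocated on an SR with what is physically consumed. A sketch follows; the SR name-label is an example.

```shell
# Sketch only: compare the virtual allocation of an SR with its actual
# physical utilization. The SR name-label is an example value.
xe sr-list name-label="Xen6_P1_SR1" \
    params=name-label,virtual-allocation,physical-utilisation,physical-size
```

On a thin-provisioned Compellent volume, physical consumption on the array side is reported by the Storage Center console rather than by XenServer, which sees the full provisioned size.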
Data Progression
Data Progression on XenServer
The foundation of Dell Compellent’s Automated Tiered Storage patent is our unique Dynamic Block
Architecture. Storage Center records and tracks specific information about blocks of data, including
time written, time accessed, frequency of access, associated volume, RAID level, and more. Data
Progression utilizes all of this metadata, or “data about the data” to automatically migrate blocks of
data to the optimum storage tier based on usage and performance, unlike traditional systems that
move entire files.
Figure 82, Data Progression
Data Progression automatically classifies and migrates data to the optimum tier of storage, retaining
frequently accessed data on high performance storage and storing infrequently accessed data on lower
cost storage.
XenServer, like other virtualization hypervisors, will contain virtual machines running Windows, Linux, or other operating systems. These virtual machines contain stagnant data, data that is read frequently, and heavy read/write data such as transaction logs and pagefiles.
Take a virtual machine running a file server, for example. A user copies a new file to the file server. The Dell Compellent system writes the data instantly to Tier 1, RAID 10. The longer the file sits without any reads or writes, the further the blocks of data that make up the file transition down the tiering structure until they reach Tier 3, RAID 5. Typically less than 20% of the data on a file server is accessed frequently. The Dell Compellent system is optimized to automatically move this data between tiers without any assistance. In a typical storage solution, an administrator would have to manually move files from one tier to another. This equates to cost savings by storing static data on low-cost, high-capacity disks and by eliminating the need to manage data manually. Only data that is required to be on Tier 1 storage will remain on that tier.
Boot from SAN
In some cases, such as with blade servers that do not have internal disk drives, booting from SAN is the only option. However, many XenServers have internal mirrored drives, giving administrators the flexibility to choose whether to boot from SAN or local disks.
Booting from SAN allows administrators to take Replays of the boot volume, replicate it to a DR site,
and provides for fast recovery to other identical hardware if that XenServer fails. However, there are
also benefits to booting from local disks and having the virtual machines located on SAN resources.
Since it only takes about 30 minutes to install and patch a XenServer, booting from local disks ensures the server will stay online if there is a need to do maintenance on Fibre Channel switches, Ethernet switches, or the SAN itself. The other advantage of booting from local disks is that this configuration
does not require iSCSI or FC HBAs. The XenServer can boot from local disk and use the iSCSI software
initiator to connect to shared storage on the SAN.
VM Metadata Backup and Recovery
The metadata for a VM contains information about the VM (such as the name, description, and Universally Unique Identifier (UUID)), the VM configuration (such as the amount of virtual memory and the number of virtual CPUs), and information about the use of resources on the host or Resource Pool (such as Virtual Networks, Storage Repository, ISO Library, and so on).
Most metadata configuration data is written when the VM is created and is updated when changes to
the VM configuration are made. Adding a metadata export command to the change-control checklist
will ensure that this information is available if needed.
NOTE: Without the Metadata Backup the names and descriptions of files on the SR may not be available
for a recovery. This will make recovery a difficult process.
Figure 83, Conceptual Overview of XenServer Disaster Recovery
Backing Up VM Metadata
In XenServer, exporting or importing metadata can be done from the text-based console menu. On the physical console the menu is loaded by default. To start the console menu through the host console screen in XenCenter, type xsconsole at the command line.
Figure 84 Backup, Restore and Update Screen
To export the VM metadata:
1. Select Backup, Restore and Update from the menu.
2. Select Backup Virtual Machine Metadata.
3. If prompted, log on with root credentials.
4. Select the Storage Repository where the desired VMs are stored.
5. After the metadata backup is done, verify the successful completion on the summary screen.
6. In XenCenter, on the Storage tab of the SR selected in step 4, a new VDI should be created named Pool Metadata Backup.
Figure 85 Backup Summary Screen
Another option available from the console menu is Schedule Virtual Machine Metadata. This option
allows for automated exports of metadata on a daily, weekly, or monthly basis. By default this option
is disabled.
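The same metadata backup can also be taken from the dom0 command line with the xe-backup-metadata script shipped with XenServer. This is a sketch only: the SR UUID is a placeholder, and the exact flags may vary by release, so check the script's help output first.

```shell
# Sketch only: back up the pool VM metadata to a VDI on a specific SR.
# <sr-uuid> is a placeholder; verify flags with: xe-backup-metadata -h
xe-backup-metadata -c -u <sr-uuid>

# Alternatively, dump the full pool database to a file for safekeeping
xe pool-dump-database file-name=/root/pool-db-backup
```

Adding a command like this to a change-control checklist or cron job keeps the metadata backup current without visiting xsconsole.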
Importing VM Metadata
A prerequisite for running the import command in a DR environment is that the Storage Repository(s) where the replicated virtual disk images are located need to be set up and re-attached to a XenServer. Also make sure that the Virtual Networks are set up correctly by using the same names in the production and DR environments.
After the SR is attached, the metadata backup can be restored.
From the console menu:
1. Select Backup, Restore and Update from the menu.
2. Select Restore Virtual Machine Metadata.
3. If prompted, log on with root credentials.
4. Select the Storage Repository to restore from.
5. Select the Metadata Backup you want to restore.
6. Select restore only VMs on this SR or all VMs in the pool.
7. After the metadata restore is done, verify the summary screen and check for errors.
8. The VMs are now available in XenCenter and can be started at the new site.
Figure 86 Metadata Restore Summary
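The restore in the steps above can also be driven from dom0 with the companion xe-restore-metadata script. This is a sketch only: the SR UUID is a placeholder, and the flags shown (including the dry-run option) are assumptions to verify against the script's help output on the installed release.

```shell
# Sketch only: restore VM metadata from the backup VDI on an attached SR.
# <sr-uuid> is a placeholder; verify flags with: xe-restore-metadata -h

# Dry run first to list which VMs would be recreated
xe-restore-metadata -u <sr-uuid> -n

# Then perform the actual restore from that SR's metadata backup
xe-restore-metadata -u <sr-uuid>
```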
Disaster Recovery
XenServer 6 provides the enterprise with functionality designed to recover data from a catastrophic failure of hardware which disables or destroys a whole pool or site. The XenServer 6 Disaster Recovery feature provides the mechanism to back up services and applications, while Dell Compellent replication technology provides a means to make this data available at a remote site. Together they provide a high availability solution for mission critical services and applications.
This functionality is extended with XenServer Virtual Appliance (vApp) technology. A vApp is a logical
group of one or more related VMs which can be started as a single entity in the event of a disaster.
When a vApp is started, the VMs contained within the vApp are started in a predefined order, relieving the administrator from manually starting servers. The vApp functionality is useful in a DR situation where all VMs in a vApp reside on the same Storage Repository.
NOTE: XenServer Disaster Recovery can only be enabled when using LVM over FC/iSCSI HBA, or software
iSCSI. A small amount of space will be required on the storage for a new LUN which will contain the
pool recovery information.
Replication Overview
XenServer Disaster Recovery takes advantage of Dell Compellent’s replication technology to provide high availability. Dell Compellent replicates volumes in one direction. In a DR scenario, data is replicated from the primary site to the secondary site. By default, Dell Compellent replication is not bidirectional; therefore it is not possible to XenMotion between the source Storage Center (the primary site) and the destination Storage Center (the secondary site) unless using Dell Compellent Live Volumes for replication. The following best practice recommendations for replication and remote recovery should be considered.
• Compatible XenServer server hardware and OS is required at the DR site to map replicated
volumes to in the event the main XenServer Pool becomes inoperable.
• Since replicated volumes can contain more than one virtual machine, it is recommended to sort virtual machines into specific replicated and non-replicated Storage Repositories. For example, if there are 30 virtual machines in the XenServer Pool, and only eight of them need to be replicated to the DR site, a special "Replicated" volume should be created to place those eight virtual machines on, or utilize a 1:1 mapping of VMs to Volumes and only replicate the required VMs.
• Take advantage of the Storage Center QOS settings to prioritize the replication bandwidth of
certain "mission critical" volumes. For example, two QOS definitions could be created so that
the "mission critical" volume would get 80 Mb of the bandwidth, and the lower priority volume
would get 20 Mb of the bandwidth.
The following steps should be taken in preparation for a disaster:
• Configure the VMs and vApps.
• Note how the VMs and vApps are mapped to the SRs and the SRs to Volumes. Verify that the name_label and name_description are meaningful and will allow an administrator to recognize the SR after a disaster.
• Configure replication of the SR volumes.
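The preparation steps above can be partly done from the xe CLI; setting a recognizable SR name and description ahead of time makes the SR easy to identify at the DR site. The values below are examples, not values from this environment.

```shell
# Sketch only: give an SR a recognizable name and description before a
# disaster. The UUID and text values are example placeholders.
xe sr-param-set uuid=<sr-uuid> \
    name-label="Prod-Replicated-SR1" \
    name-description="Replicated to SC12; contains mission critical VMs"
```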
After the VMs and vApps have been configured, the Volumes can be replicated to the secondary DR site.
This process is simplified with Dell Compellent Enterprise Manager (EM) GUI. In the example below, an
SR Volume that resides on a Storage Center named SC13 at the primary location is replicated to a
Storage Center named SC12 at the secondary location. The Dell Compellent Enterprise Manager User
Guide outlines the steps necessary to configure replication.
Figure 87 Enterprise Manager Replication
Disaster recovery can be configured once replication is set up and all data has been replicated to the secondary site. Follow the steps below to configure Disaster Recovery.
NOTE: The examples below serve as a reference for the requirements of configuring XenServer DR with Dell Compellent Storage Center. For complete information on configuring and testing XenServer DR, consult the Citrix XenServer 6.0 Administrator's Guide.
1. Select the pool at the primary site that will be protected and go to the Pool menu, Disaster
Recovery, and select Configure. This will open the DR configuration window.
Figure 88, Select DR Pool
2. Select the Storage Repositories that will be protected with XenServer DR and click OK to finish.
XenServer DR is now configured on the volume and ready to be tested.
Test XenServer Disaster Recovery
The process below will test the configuration of XenServer DR, replication, and the configuration of the Pool at the secondary site. The process will use a Storage Center View Volume for testing. The View Volume will be created at the secondary site and mapped to that pool for the DR test. The use of View Volumes allows the test to be performed without interrupting replication between the two sites.
The steps below outline the process of testing XenServer DR with Dell Compellent:
NOTE: Be sure the most recent information, including a Replay taken after DR was configured, has
been replicated to the secondary site before testing DR failover.
1. Create a View Volume of the replicated volume at the secondary site by right clicking on the most
recent Replay and selecting Create Volume from Replay.
Figure 89 Create View Volume for DR Test
2. Next, map the new View Volume to the servers in the recovery pool.
3. After the View Volume has been created and mapped, run the Disaster Recovery Wizard by
selecting the recovery pool in XenCenter going to Pool, Disaster Recovery, and selecting Disaster
Recovery Wizard.
4. On the Disaster Recovery Wizard window select Test Failover and click on Next.
Figure 90 Disaster Recovery Failover Test
5. Read the message on the Before You Start screen and click Next to reach the Locate Mirrored SRs
screen. From the Find Storage Repositories dropdown box, select the type of mappings used to
connect the servers in the Pool to the View Volume, either HBA or software iSCSI.
NOTE: Only iSCSI and FC HBAs and Software iSCSI are available for the XenServer DR feature.
Figure 91, Locate Mirrored SR
6. Select the SR to test and click Next to continue. XenServer will mount the SR and discover the VMs
and vApps on the volume.
7. On the next screen, select the VMs and vApps to be tested. Also select the desired option for the
power state after the recovery. Click Next to continue.
Figure 92 Select VMs and vApps to test
8. The Disaster Recovery Wizard will check prerequisites on the next screen. Once the failover
pre-checks are finished, click the Fail Over button to continue the test. The test may take
some time depending on the number of VMs involved. During this time, the VMs and vApps that
were selected in the previous step will be created in the secondary Pool and started if that
option was selected.
Figure 93 Failover Test Progress
9. The progress screen will show the status of the DR process.
10. Clicking Next will display the summary of the test. The VMs and vApps, as well as the replicated volume, will then be removed from the Pool.
11. Clicking Finish at the Summary of Test Failover screen will conclude the test.
Recovering from a Disaster
The steps to recover from a disaster are similar to testing a failover, with a few exceptions. Below are the steps to take when recovering from a disaster:
• Break replication between the primary and secondary site
• Shut down the VMs and vApps at the primary site if they are still running
• Ensure that the recovery volume at the secondary site is not attached to any other pool. If the volume is attached to multiple pools, data corruption may occur.
There are two options to prepare the volume at the secondary Storage Center for a failover. The first
option is to create and map a View Volume to the servers in the recovery pool. This is the same
process as outlined in the failover test above and is the preferred method for recovering from a
disaster.
The second option is to remove replication and mount the replicated volume. This can be done by
removing the replication in Enterprise Manager and adding mappings to the servers in the recovery pool
at the secondary site.
1. To remove replication, go to Replications in EM and select the source Storage Center.
2. Right Click on the volume and select Delete. This will bring up the Delete Replication screen.
Figure 94 Delete Replication in EM
3. Be sure that “Put Destination Volume in the Recycle Bin” is NOT selected and click OK.
4. Alternatively, if the Storage Center at the source site is not available, the source Storage Center mappings can be removed from the “Mapping” tab under the destination volume’s properties. This will prevent replication to the volume if the source comes back online.
Figure 95 Remove Source Mapping
Once the replication has stopped the volume at the secondary site can be mapped to the servers in the
recovery pool.
1. To begin the failover process, select the recovery pool and go to Pool, Disaster Recovery Wizard
and select Failover on the Welcome screen.
Figure 96, Disaster Recovery Failover
2. Click Next on the Before you start screen and use the Find Storage Repositories dropdown to locate the recovery SR that was mapped to the servers in a previous step. Repeat this process for each SR to be recovered. Click Next when finished.
Figure 97, Select Mirrored SR
3. Select the VMs and vApps that are to be recovered. Select the appropriate Power State after Recovery option and click Next.
Figure 98, Select vApps and VMs to Fail Over
4. Resolve any pre-check errors and click Fail Over to begin the failover process. This may take some
time depending on the number of VMs and vApps to be recovered.
Figure 99, DR Failover Progress
5. Once the DR process has completed, a summary page displays the status of each vApp and VM. Click Finish to exit the wizard.
Replication Based Disaster Recovery
Citrix XenServer 6.0 introduces the new automated DR option as outlined above. This Disaster Recovery tool is available only for environments with a Platinum Software subscription. For users without a Platinum Software subscription there is still a semi-automated option for recovering virtual machines at a DR site. This method leverages the Virtual Machine Metadata backup and Dell Compellent replication to make VMs available at a recovery site. The basic steps involved include:
1. Configure replication of the protected volumes from the primary site to the secondary site.
2. Back up the VM Metadata to each replicated LUN.
3. Create a local recovery View volume from a Replay on the replicated XenServer SR volume.
4. Map that View Volume to the XenServer host(s).
5. Add the Storage Repository to the XenServer host(s).
6. Restore the Virtual Machine Metadata.
Disaster Recovery Replication Example
This scenario will step through the recovery of a XenServer Pool and all its volumes on a remote DR
Server Pool. This scenario will utilize two Dell Compellent Storage Centers performing one-way replication from the source system to the destination system.
DR Environment:
Primary Site:
• Dell Compellent Storage Center SC13
• One Pool (Pool1) consisting of servers xenserver6p1s1 and xenserver6p1s2
• One FC connected Volume labeled Xen6_P1_SR1
• Virtual Machine Metadata has been backed up using the steps outlined in the VM Metadata Backup and Recovery section
Secondary Site:
• Compellent Storage Center SC12
• One Pool (Pool2) consisting of servers xenserver6p2s1 and xenserver6p2s2
• FC connected replicated volume
The figure below shows the Pool1 servers, datastore, and VMs within the SR. Note also that the Pool Metadata Backup exists on the SR.
Figure 100 Primary Site Servers and SR
After the VMs and vApps have been configured, the Volumes can be replicated to the secondary DR site. This process is simplified with Dell Compellent Enterprise Manager (EM). In the example below, the SR Volume that resides on a Storage Center named SC13 at the primary location is replicated to a Storage Center named SC12 at the secondary location. The Dell Compellent Enterprise Manager User Guide outlines the steps necessary to configure replication between the Storage Centers.
Figure 101 Enterprise Manager Replication
Next, a disaster is simulated by removing the replication jobs between the primary and secondary
Storage Center in Enterprise Manager.
1. Replication can be removed in Enterprise Manager by going to Replications and selecting the
source Storage Center. This will list the replications from that Storage Center.
2. Right click on the volume and select Delete. This will bring up the Delete Replication screen. Be sure that “Put Destination Volume in the Recycle Bin” is NOT selected and click OK.
Figure 102 Delete Replication in EM
NOTE: A disaster test could have been done by simply creating a View Volume from one of the Replays on the DR Storage Center system. This process would allow the testing of a DR plan to validate data at any time without disrupting replication.
3. Next, the servers at the secondary site are mapped to the volume. In this example, servers
XenServer6P2S1 and XenServer6P2S2 are mapped to the volume “Repl of Xen6_P1_SR1”.
Figure 103, Server Mapping to the Recovery Volume
4. After the volume is mapped to the servers in pool2 at the secondary site it can be attached
using the New Storage Wizard in XenCenter. The figure below shows the storage attached to
the secondary pool. The VM files are on the storage but not yet available in the pool.
Figure 104, Recovery Pool
5. To add the VMs to the recovery pool the Metadata will need to be restored using the XenServer
Console Backup, Restore and Update menu. It is important that the VM networks are named
exactly the same in order for this to succeed.
Figure 105, VM Metadata Restored
6. After the Metadata is restored, the VMs will be available at the secondary site and can be started on the remote DR XenServer.
Figure 106, Recovered VMs
After the recovery to the secondary site it may be necessary to fail back to the primary site. The
failback process is the same as outlined above, except for modifying the primary and secondary site to
reflect the VMs source and destination location.
Live Volume
Overview
Live Volume is a software option for Compellent Storage Center that builds upon the Fluid Data architecture. Live Volume enables non-disruptive data access and migration of data between two Storage Centers.
Figure 107, Live Volume Overview
Live Volume is a software-based solution integrated into the Dell Compellent Storage Center
controllers. Live Volume is designed to operate in a production environment, allowing both Storage
Centers to remain operational during volume migrations.
Live Volume increases operational efficiency, reduces planned outages, and enables a site to avoid
disruption during anticipated disasters. Live Volume provides these powerful new options:
• Storage Follows the Application in Virtualized Server Environments. Live Volume automatically migrates data as virtual applications are moved.
• Zero Downtime Maintenance for Planned Outages. Live Volume enables all data to be moved non-disruptively between Storage Centers, enabling full planned site shutdown without downtime.
• On-demand Load Balancing. Live Volume enables data to be relocated as desired to distribute workload between Storage Centers.
• Stretch Microsoft, VMware, and XenServer volumes between geographically dispersed locations. Live Volume allows servers to see the same disk signature on the volume between datacenters, thereby allowing the volume to be clustered.
Live Volume is designed to fit into existing physical and virtual environments without disruption and
without requiring extra hardware or changes to configurations or workflow. Physical and virtual servers
see a consistent, unchanging virtual volume. All volume mapping is consistent and transparent before,
during, and after migration. Live Volume can be run automatically or manually and is fully integrated
into the Storage Center software environment. Live Volume operates asynchronously and is designed
for planned migrations where both Storage Centers are simultaneously available.
A Live Volume can be created between two Dell Compellent Storage Centers residing in the same
datacenter or between two well-connected datacenters.
Using Dell Compellent Enterprise Manager, a Live Volume can be created from a new volume, an
existing volume, or an existing replication. For more information on creating Live Volume, see the
Compellent Enterprise Manager User Guide.
For more information on best practices for Live Volume, please see the Dell Compellent Storage
Center Best Practices Document for Live Volume on the Dell Compellent Knowledge Center Portal at
http://kc.compelent.com.
Appendix 1 Troubleshooting
XenServer Pool FC Mapping Issue

Occasionally, when connecting an FC volume to a XenServer pool, the mapping is made only on the
Master node in the pool and the volume is not connected on the additional nodes. This typically
occurs when attempting to attach the volume immediately after it is created. In most instances,
waiting approximately one hour before mapping the volume will prevent this issue from occurring.
The following section details the steps necessary to fix the missing connection when mapping a
new SR to a XenServer pool, without rebooting the hosts or moving the Master.
Notice in the figure below that the SR mapped to Pool1 is connected correctly on host
XenServer6P1S1 but not on XenServer6P1S2.
Figure 108, SR Mapping Broken
To resolve this issue, log on to the console of one of the XenServers in the pool and open the local
command shell. This can be done either from the console or from an SSH client such as PuTTY.

1. At the command prompt, type xe host-list to obtain the list of all the servers in the pool and
their associated UUIDs.

[root@XenServer6P1S1 ~]# xe host-list
uuid ( RO)                : 5cd5d2ed-b462-4eba-9761-d874b8e3e564
          name-label ( RW): XenServer6P1S1.techsol.local
    name-description ( RO): Default install of XenServer

uuid ( RO)                : be925e21-a95e-438d-8155-b98d09c26351
          name-label ( RW): XenServer6P1S2.techsol.local
    name-description ( RO): Default install of XenServer

[root@XenServer6P1S1 ~]#
2. Run the sr-probe command for each XenServer host that is not mapping the volume correctly.
Type the following to probe the host:

xe sr-probe host-uuid=<uuid of server> type=lvmohba
[root@XenServer6P1S1 ~]# xe sr-probe host-uuid=5cd5d2ed-b462-4eba-9761-d874b8e3e564 type=lvmohba
3. Once the sr-probe command has completed for all of the hosts, the SR can be repaired by
right-clicking the SR in the XenCenter console and selecting Repair Storage Repository.
Figure 109, Repair Storage Repositories
4. Click the Repair button.
5. When the repair is complete, all nodes should report back as Connected.
Figure 110, Repaired SR
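The per-host probe steps above can be sketched as a small shell loop run on any pool member. The awk filter is an assumption based on the xe host-list output format shown above, and the lvmohba SR type matches the FC SRs used in this guide; adjust both to your environment.

```shell
# Pull every host UUID out of `xe host-list` output
# (matches the "uuid ( RO) : <uuid>" lines shown above).
extract_uuids() {
  awk '/uuid \( RO\)/ {print $NF}'
}

# Probe the HBA SRs on every host in the pool so the missing
# mappings are refreshed before repairing the SR in XenCenter.
if command -v xe >/dev/null 2>&1; then
  for uuid in $(xe host-list | extract_uuids); do
    xe sr-probe host-uuid="$uuid" type=lvmohba
  done
fi
```

After the loop completes, finish the repair from XenCenter as described in the steps above.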
Starting Software iSCSI

Software iSCSI may need to be started manually when iSCSI commands are run on the server and
either of the following errors occurs:
Cannot perform discovery. Initiatorname required
or
iscsid is not running. Could not start up automatically using the startup command….
There are two ways to start iSCSI:
1. Through the GUI:
The process of adding storage with XenCenter will start iSCSI on the server. In XenCenter,
select the host and go to New Storage > Software iSCSI, enter the control port in the Target Host
field, then click Discover IQNs and Discover LUNs. The Discover LUNs step starts iSCSI on the
host. Cancel out of the New Storage wizard to quit without making changes.
2. Through the Command Line:
Start the iSCSI service:
service open-iscsi start
Run sr-probe:
xe sr-probe type=lvmoiscsi device-config:target=x.x.x.x
where x.x.x.x is the Storage Center Control Port
Software iSCSI Fails to Start at Server Boot

See Citrix document CTX122852 to configure automatic startup. These steps should be run after iSCSI
has been started and has scanned the Storage Center; scanning the Storage Center creates the node
IQN files, referenced in the Citrix document, on the server.
Wildcard Doesn't Return All Volumes

When adding storage using the wildcard option, an incomplete list of volumes mapped to the server
may be returned. In this situation, XenCenter scans only the first IP address listed in the Target
Host field, resulting in an incomplete list of target LUNs. This is a known issue with the XenCenter
GUI. To work around it, cycle through the Storage Center control ports in the Target Host field. Be
sure to always use the (*) wildcard Target IQN when discovering LUNs. This is a GUI issue and does
not affect multipathing.
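Cycling through the control ports can also be done from the CLI with the sr-probe command shown earlier in this guide, one probe per control port. The IP addresses below are hypothetical placeholders; substitute your own Storage Center control port addresses.

```shell
# Hypothetical control-port IPs for this Storage Center; substitute your own.
CONTROL_PORTS="10.10.1.10 10.10.1.11"

# Build the probe command for one control port (kept as a function so
# the command line can be inspected before it is run).
probe_cmd() {
  echo "xe sr-probe type=lvmoiscsi device-config:target=$1"
}

# Probe every control port instead of relying on the first Target Host
# entry, which is all the XenCenter GUI scans.
if command -v xe >/dev/null 2>&1; then
  for ip in $CONTROL_PORTS; do
    eval "$(probe_cmd "$ip")"
  done
fi
```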
Caution: Issues have been identified with the Citrix implementation of multipathing and Storage
Center in virtual port mode. It is strongly recommended to use iSCSI HBAs when implementing
XenServer with Storage Center in virtual port mode.
Figure 111, Finding Target LUNs
View Multipath Status

Use the iscsiadm -m session command to view the active software iSCSI sessions on the server.

Use the mpathutil status command to view the status of multipathing.

Only one path may be shown in a multipath environment, typically after a reboot:
Figure 112, Multipath Issue
Run the iscsiadm -m node --login command to force the iSCSI software initiator to connect both
paths.
Figure 113, Multipath Active
Following the steps outlined in Citrix Document CTX122852 may resolve this issue.
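The post-reboot check above can be scripted: count the active sessions and force a login only when paths are missing. The session line format and the expected path count of 2 are assumptions for a typical dual-path configuration; adjust to your fabric.

```shell
# Count active software iSCSI sessions; `iscsiadm -m session` prints
# one "tcp: [...]" line per session.
count_sessions() {
  grep -c '^tcp:'
}

# Fewer than two sessions after a reboot means a path is missing, so
# force a login on all discovered nodes, then confirm with mpathutil.
# (The expected count of 2 assumes a dual-path configuration.)
if command -v iscsiadm >/dev/null 2>&1; then
  sessions=$(iscsiadm -m session 2>/dev/null | count_sessions)
  if [ "${sessions:-0}" -lt 2 ]; then
    iscsiadm -m node --login
  fi
  mpathutil status
fi
```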
XenCenter GUI Displays Multipathing Incorrectly

If the GUI does not display multipathing information correctly, restart the multipathd service with
this command:

service multipathd restart

Next, run the Python script to update XenCenter:

/opt/xensource/sm/mpathcount.py
Connectivity Issues with a Fibre Channel Storage Repository

If there are issues with a Fibre Channel SR, first identify the host UUID with this command:

xe host-list

Next, probe the SR using this command:

xe sr-probe host-uuid=<uuid of server> type=lvmohba

Then go into XenCenter and select the server. Right-click the SR and select the Repair Storage
Repository option.
Figure 114, Repair Storage Repository
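The host lookup and probe can also be combined into one loop so every pool member is probed before the repair. This sketch uses the standard xe --minimal flag, which prints UUIDs as a single comma-separated line; the lvmohba type matches the FC SRs used in this guide.

```shell
# `xe host-list --minimal` prints the host UUIDs as one comma-separated
# line; split it into one UUID per line before looping.
split_minimal() {
  tr ',' '\n'
}

# Probe the HBA SRs on every host before repairing the SR in XenCenter.
if command -v xe >/dev/null 2>&1; then
  for uuid in $(xe host-list --minimal | split_minimal); do
    xe sr-probe host-uuid="$uuid" type=lvmohba
  done
fi
```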