Citrix XenServer Dell Edition
Solution Guide
Notes, Notices, and Cautions
NOTE: A NOTE indicates important information that helps you make better use of
your computer.
NOTICE: A NOTICE indicates either potential damage to hardware or loss of data
and tells you how to avoid the problem.
____________________
Information in this document is subject to change without notice. © 2008 Dell Inc. All rights reserved.
Reproduction in any manner whatsoever without the written permission of Dell Inc. is strictly forbidden.
Trademarks used in this text: Dell, the DELL logo, EqualLogic, OpenManage, PowerVault, and PowerEdge are trademarks of Dell Inc.; Xen, XenServer, XenCenter and XenMotion are either registered trademarks or trademarks of Citrix Inc.; Intel and Xeon are registered trademarks of Intel Corporation in the U.S. and other countries; AMD is a registered trademark and Opteron is a trademark of Advanced Micro Devices Inc.; Microsoft, Windows, Windows Vista, and Windows Server are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries.
Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.
May 2008 Rev. A01
Contents
Introduction . . . 5
Citrix XenServer Dell Edition Features . . . 5
Citrix XenServer Dell Edition Licensing Options . . . 6
Supported Configurations . . . 7
Supported Hardware . . . 8
XenServer Virtual Machine Operating System Support . . . 9
XenMotion Support Requirements . . . 11
Configuring XenCenter . . . 11
Initial Boot Setup . . . 13
Systems Management Using Dell OpenManage . . . 19
Supported OpenManage Software Versions . . . 19
Dell OpenManage Components and Support Information . . . 19
Using Dell OpenManage in Citrix XenServer Dell Edition environment . . . 22
Using Dell IT Assistant in a Citrix XenServer Environment . . . 23
Change Management in XenServer Dell Edition . . . 24
Configuring Storage . . . 26
Local Storage Repository Configuration . . . 26
Deploying Citrix XenServer Dell Edition with Dell Storage Arrays . . . 28
Moving an SR Between Hosts . . . 55
Resizing An SR After Changing the Size of An LVM-Based Storage Volume . . . 56
Recovering the XenServer Host and VMs . . . 57
Database Backup From XenServer Local Console . . . 58
Using the Recovery CD . . . 58
Restoring XenServer Database . . . 59
Recovering XenServer After a Board Replacement . . . 60
Resetting the Root Password . . . 60
Recovering Virtual Machines in a XenServer Pool . . . 60
Updating the XenServer Image to A New Version . . . 64
Best Practices . . . 66
IP storage traffic segregation and high availability configuration . . . 66
Setup of Remote Database on iSCSI LUN for Diskless Hosts . . . 69
Scripted Backup of XenServer Host Database . . . 70
Using storage array snapshots in a Citrix XenServer environment . . . 72
Adjusting SCSI timeouts to tolerate storage controller failures . . . 75
Configuring XenServer Management Network for High Availability . . . 76
Appendix . . . 77
XenServer Dell Edition Local Console Menu Items . . . 77
Supported Peripherals and Device Drivers in Citrix XenServer Dell Edition 4.1 . . . 84
Troubleshooting . . . 85
References . . . 86
Introduction
With the 64-bit open-source Xen hypervisor at its core, Citrix® XenServer Dell Edition™ is a powerful virtualization solution that enables efficient resource consolidation, utilization, dynamic provisioning, and integrated systems management. XenServer Dell Edition has a small footprint and is optimized to run from internal flash storage in Dell PowerEdge™ servers. Dell and Citrix have partnered to bring pre-qualified and virtualization-ready platforms to today’s dynamic and growing data centers. This guide provides information on Citrix XenServer Dell Edition features, supported hardware, reference configurations, and best practices.
Citrix XenServer Dell Edition Features
• Factory installed from Dell
Citrix XenServer Dell Edition is factory installed from Dell on select PowerEdge servers. This reduces installation and deployment time and simplifies getting your infrastructure ready to use. With minimal configuration, the XenServer host is available to run virtual machines.
• Integrated systems management and monitoring
Citrix XenServer Dell Edition comes pre-installed with Dell OpenManage™ Server Administrator. This enables systems management right out of the box without any additional need to install an agent on the host.
• New XenServer Local Console
The XenServer Dell Edition includes a new XenServer Local Console user interface to enable local administration of the host. XenServer Local Console enables users to configure and view host specific properties such as management network configuration, local storage for virtual machines, etc. XenCenter, the standard Microsoft® Windows® management console for the XenServer, is also available.
• Optimized footprint and controlled environment
XenServer Dell Edition is optimized for a smaller disk footprint and for fewer writes to flash storage. The majority of the XenServer Dell Edition file system is read-only, which provides tighter control over the XenServer operating environment. The XenServer Dell Edition host agent software has been significantly optimized to minimize the number of write cycles to the flash storage. XenServer writes to flash storage only when something important has changed and must be recorded. Write minimization helps improve the life of the flash storage.
• Improved reliability and diskless configurations
Running XenServer Dell Edition on a server’s internal flash storage provides improved reliability over running on traditional hard disks. Flash storage does not have moving parts and is more reliable than a hard disk.
Since XenServer resides on the server’s internal flash storage, there is no need for local hard disks on the server. XenServer hosts can connect to remote iSCSI or NFS storage and take advantage of features such as XenMotion to minimize virtual machine downtime and workload migration.
• Improved XenServer updates
To improve reliability of software upgrades, the XenServer Dell Edition image contains a primary and secondary copy of the XenServer filesystem. Any time an update is applied, the secondary copy gets updated leaving the primary copy in a known good state. The secondary copy becomes the primary after a successful upgrade. An update to the XenServer host can be applied using the XenServer local console or XenCenter.
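The flip between the primary and secondary copies can be illustrated with a small symlink sketch. The paths and version strings here are hypothetical; this is not the actual Dell Edition flash layout, only an illustration of the A/B update pattern described above.

```shell
#!/bin/sh
# Conceptual sketch of an A/B image update (hypothetical paths).
set -e
root=$(mktemp -d)
mkdir -p "$root/image-a" "$root/image-b"
echo "version 4.1.0" > "$root/image-a/VERSION"   # current (primary) copy
ln -s image-a "$root/current"                    # "boot pointer"

# An update is written only to the secondary copy, leaving the
# primary copy in a known good state throughout.
echo "version 4.1.1" > "$root/image-b/VERSION"

# Only after the update completes successfully is the pointer
# flipped, making the secondary copy the new primary.
ln -sfn image-b "$root/current"
cat "$root/current/VERSION"
```

If the update fails partway, the pointer is never flipped and the host still boots the untouched primary copy.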
• Pre-certified and supported configurations
Citrix XenServer Dell Edition is certified and fully supported by Dell for select server and storage configurations. Both Dell and Citrix have worked closely to provide the highest quality product and user experience.
Citrix XenServer Dell Edition Licensing Options
Dell provides the following licensing options for Citrix XenServer Dell Edition:
Citrix XenServer Dell Express Edition supports a single XenServer Host with up to 4 sockets (or multiple XenServer Hosts, one at a time), up to 128GB physical RAM, hosting up to 4 concurrent VMs and up to 32GB of RAM per VM.
Citrix XenServer Dell Enterprise Edition supports multiple simultaneous XenServer Hosts with up to 128GB physical RAM and no limit on the number of concurrent VMs except the amount of available RAM. It also offers the following additional features:
• clustering of XenServer Hosts into resource pools
• support for NFS, iSCSI and SAS shared storage repositories
• live relocation (XenMotion) of VMs within the same resource pool
• additional Quality of Service (QoS) control for VMs
• multi-host management using single XenCenter console
From the Dell factory, the PowerEdge server comes enabled for Citrix XenServer Dell Express Edition and requires only a license key to enable Citrix XenServer Dell Enterprise Edition. There are two options for obtaining this license key:
• If you purchased Citrix XenServer Dell Enterprise Edition, you must redeem the activation code from the license card you received with your server. Please visit www.citrix.com/xenserver/dell (registration may be required) and redeem your authorization code(s) for Enterprise license files.
• To purchase an upgrade license key, visit www.dell.com/xenserver or contact your Dell sales representative.
You may apply your Enterprise license file(s) by using the XenServer local console or XenCenter.
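For scripted deployments, the license file can also be applied from the host shell. A hedged sketch follows, assuming the standard `xe host-license-add` CLI verb (verify with `xe help host-license-add` on your release); the path is a placeholder, and the command is printed rather than executed so it can be dry-run anywhere.

```shell
# build_license_cmd prints the xe invocation for a given license file,
# refusing files that do not exist or are unreadable.
build_license_cmd() {
    lic="$1"
    [ -r "$lic" ] || { echo "license file not found: $lic" >&2; return 1; }
    echo "xe host-license-add license-file=$lic"
}

build_license_cmd /root/enterprise.lic || true   # hypothetical path, absent here
```

On a real XenServer host you would run the printed command directly instead of echoing it.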
Supported Configurations
Table 1. Support matrix

Dell PowerEdge Server   Citrix XenServer Version             OpenManage Version
PowerEdge R805          Citrix XenServer Dell Edition 4.1    OpenManage 5.4
PowerEdge 2950 III      Citrix XenServer Dell Edition 4.1    OpenManage 5.4
Supported Hardware

Servers

Server Model         Processors supported
PowerEdge 2950 III   Intel® Xeon® 5100, 5200, 5300 and 5400 series
PowerEdge R805       AMD® Opteron™ 2200 and 2300 series

Network Interface Cards

Vendor     Product
Broadcom   Broadcom® NetXtreme 5722 Single Port Gigabit Ethernet NIC
Broadcom   Embedded Broadcom® NetXtreme II 5708 Gigabit Ethernet NIC
Broadcom   Broadcom® NetXtreme II 5708 1-Port Gb Ethernet NIC
Intel      Intel® PRO 1000PT Single Port Server Adapter
Intel      Intel® PRO 1000PT Dual Port Server Adapter
Intel      Intel® PRO 1000PF Single Port Server Adapter
Intel      Intel® PRO 1000VT Quad Port Gigabit NIC
Intel      Intel® 10GBase-SR XF Single Port 10GbE NIC

Storage Controllers

Vendor   Product
Dell     SAS 5/E Controller
Dell     SAS 6/iR Integrated Controller
Dell     PERC 6/E SAS RAID Controller
Dell     PERC 6/i Integrated SAS RAID Controller
Storage Arrays

Array Name             Notes
PowerVault™ MD1000     Direct attached storage
PowerVault NX1950      Support for providing only NFS storage to XenServer hosts
PowerVault MD3000      Direct attached, fully redundant configuration to provide shared storage for up to two hosts
PowerVault MD3000i     Single-path support only, no RAID controller failover
EqualLogic™ PS5000E    Multi-path iSCSI, fully redundant configuration
EqualLogic PS5000X     Multi-path iSCSI, fully redundant configuration
EqualLogic PS5000XV    Multi-path iSCSI, fully redundant configuration

NOTE: Refer to "Configuring Storage" on page 26 for more details on reference configurations.

XenServer Virtual Machine Operating System Support

XenServer virtual machines (VMs) are created from templates. A template is a file that contains all of the configuration settings needed to instantiate a specific VM. XenServer Dell Edition ships with a base set of templates, some of which boot an OS vendor installation CD/DVD and others that run an installation from a network repository. Refer to Table 2 for a list of supported operating systems and installation methods.

Creating VMs of the various supported types is described in detail in the XenServer Virtual Machine Installation Guide at www.citrix.com/xenserver/dell.

Additionally, VMs can be created by importing an existing exported VM.

NOTE: The standard editions of XenServer provide two Debian templates that each contain a complete basic Debian Linux distribution, and also support physical-to-virtual conversion (P2V) of existing instances of Red Hat Enterprise Linux 3.6, 3.8, 4.2-4.3, and SUSE Linux Enterprise Server 9 SP2 and SP3. Because of the space constraints imposed by the flash media on which XenServer Dell Edition runs, creating VMs of these types is not supported by XenServer Dell Edition. Instances of those VMs created and exported with the standard editions of XenServer, however, can be imported to XenServer Dell Edition.
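An exported VM can also be imported from the command line. A dry-run sketch, assuming the standard `xe vm-import` verb (check `xe help vm-import` on your release) and a hypothetical file name:

```shell
# Compose the import command for a previously exported VM image.
# The .xva path is a placeholder; substitute your own exported file.
exported_vm=/var/tmp/debian-base.xva
import_cmd="xe vm-import filename=$exported_vm"
echo "$import_cmd"    # printed here; run the command itself on a real host
```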
In general, when allocating resources such as memory and disk space to a VM, follow the guidelines of the operating system and any relevant applications that you want to run. Recommended memory and disk space guidelines are further described in the XenServer Virtual Machine Installation Guide.
NOTE: Individual versions of the operating systems may also impose their own maximum limits on the amount of memory supported (e.g., for licensing reasons).
Table 2. Supported Operating Systems and Installation Methods for VMs
(CD = vendor install from CD; Network = vendor install from network repository)

Operating system                                                 CD    Network
Windows Vista® Enterprise Edition, 32-bit                        X
Windows Server® 2003, Enterprise, Datacenter, and SBS
editions, 32-bit, SP0, SP1, SP2, and R2                          X
Windows Server 2003 Server, Enterprise, Datacenter, and
SBS editions, 64-bit                                             X
Windows XP SP2                                                   X
Windows 2000 SP4                                                 X
CentOS 4.5-4.6                                                   X     X
CentOS 5.0-5.1 32-bit                                            X     X
CentOS 5.0-5.1 64-bit                                            X     X
Oracle Enterprise Linux 5.0-5.1 32-bit                           X     X
Oracle Enterprise Linux 5.0-5.1 64-bit                           X     X
Red Hat Enterprise Linux 4.1, 4.4, 4.5, 4.6                            X
Red Hat Enterprise Linux 5.0-5.1 32-bit                          X     X
Red Hat Enterprise Linux 5.0-5.1 64-bit                          X     X
SUSE Linux Enterprise Server 10 SP1                                    X

XenMotion Support Requirements

XenMotion is the capability to migrate a running virtual machine between physical hosts within a XenServer resource pool with no interruption in service. XenMotion is only possible among hosts that are part of a Resource Pool. A Resource Pool is an aggregate of one or more homogeneous XenServer hosts, up to a maximum of 16.
The definition of homogeneous is:
• each CPU is from the same vendor
• each CPU is the same model (except for stepping)
• each CPU has the same feature flags
For more information on pool requirements other than processor models, refer to the XenServer Administrator's Guide at www.citrix.com/xenserver/dell.

Configuring XenCenter

XenCenter is the client application for managing the XenServer host and its virtual machines. Table 3 outlines the system requirements for XenCenter.
Table 3. XenCenter System Requirements

Operating system         Windows XP, Windows Server 2003, or Windows Vista, with .NET Framework version 2.0 or above installed
CPU Speed                750 MHz minimum; 1 GHz or faster recommended
RAM                      1 GB minimum; 2 GB or more recommended
Disk space               100 MB minimum
Network interface card   100 Mb or faster NIC

The installer for XenCenter is available for download under the "Technical Support" section at www.dell.com/xenserver.
To install XenCenter, perform the following steps:
1 Before installing XenCenter, be sure to uninstall the previous version if one exists.
2 Browse to the directory where you downloaded the XenCenter installer and find the file. Double-click the file's icon to launch the application installer.
3 Follow the instructions displayed in the installer window. When prompted for an installation directory, either click Browse to change the default installation location, or click Next to accept the default path C:\Program Files\Citrix\XenCenter.
When complete, there will be a XenCenter group on the All Programs list.
NOTE: By default, XenCenter allows saving of usernames and passwords. To disable this, use the Registry Editor, navigate to the key HKEY_CURRENT_USER\Software\Citrix\XenCenter and add a key named AllowCredentialSave with the string value false. This causes XenCenter to no longer save usernames or passwords, and disables the Save and Restore Connection State dialog box in XenCenter (Tools -> Save and Restore).
To uninstall XenCenter, perform the following steps:
1 Select Control Panel from the Start menu.
2 In Windows XP, 2000, or 2003, select Add or Remove Programs. In Windows Vista, select Programs and Features.
3 A list of programs installed on the computer is displayed. Scroll down if necessary and select XenCenter.
4 In Windows XP, 2000, or 2003, click the Remove button. In Windows Vista, select Uninstall from the toolbar above the list of programs.
5 This removes the XenCenter application. At the end, a message is displayed. Click OK to close the message box.
To use XenCenter:
1 Launch XenCenter by double-clicking the XenCenter icon on the Windows desktop, or selecting XenCenter from the Start menu.
2 Click Add New Server, or select Connect from the Server menu. The Add New Server dialog box appears.
3 Provide the IP address or hostname of the server and the root password in their respective fields, and then click Connect. XenCenter will connect to the server and it will appear in the Resources pane along the left side of XenCenter.
For full details on using XenCenter to manage your Dell XenServer host, see the Online Help. Help can be accessed by selecting Help Contents from the Help menu. You can also use the F1 key to access context-sensitive help from any screen, dialog box or wizard.
Initial Boot Setup
This section describes the first-boot setup for a XenServer host. After you power on your server, Citrix XenServer Dell Edition automatically boots from the internal flash storage in the server. Once booted, you will see the XenServer local console menu. The XenServer local console provides a range of management and configuration options for the XenServer host.
Follow the steps below to configure the XenServer host.
1 Read through the system’s Rack Install Guide and Getting Started Guide before proceeding to rack the server (see Figure 1).
Figure 1. System documentation
2 Connect the network cables to the appropriate NIC connectors. Figure 2 shows a sample network configuration.
Figure 2. Sample network configuration
3 Power on the server. Citrix XenServer boots automatically and displays the XenServer local console menu. Change the root password when prompted, as illustrated in Figure 3.
Figure 3. Change password screen
4 If you have connected the Gb1 port, the XenServer management interface is configured on Gb1 with DHCP settings by default. Dell recommends using a static IP address and selecting the physical interface appropriate for your network environment. Go to Network and Management Interface→ Configure Management Interface. Select the physical network interface you plan to use to manage XenServer. Provide a valid IP address, netmask, gateway, and hostname for the management interface, as illustrated in Figure 4.
Figure 4. Management interface configuration
5 XenCenter uses free-form names to refer to XenServer hosts. It is recommended that you copy the hostname set in step 4 to the XenCenter name for the XenServer host. See Figure 5.
Figure 5. XenCenter Hostname configuration
6 On the Network and Management Interface screen, select Add/Remove DNS Server and add valid DNS server(s), as illustrated in Figure 6. Select Network Time (NTP) and configure your XenServer host to synchronize with NTP server(s) in your network (if available).
Figure 6. DNS configuration
7 Download the XenCenter software from the "Technical Support" section at www.dell.com/xenserver. Install XenCenter on the management station you plan to use to manage your XenServer host(s). See Figure 7.
Figure 7. XenCenter setup wizard
8 On the management station, start the XenCenter application. In the XenCenter wizard, select Add New Server, provide the hostname/IP address and login information (configured in steps 3 and 4) for your XenServer host, and click Connect. You are now ready to start managing your XenServer host using XenCenter, as illustrated in Figure 8. For more information on advanced setup procedures, refer to the product documentation available under the "Technical Support" section at www.dell.com/xenserver.
Figure 8. XenCenter
9 If you purchased Citrix XenServer Dell Enterprise Edition, you must redeem the activation code from the license card you received with your server. Please visit www.citrix.com/xenserver/dell (registration may be required) and redeem your authorization code(s) for Enterprise license files. You may apply your Enterprise license file(s) by using the XenServer local console or XenCenter. To apply the file in the local console, see "Install XenServer License" on page 79. To apply the file in XenCenter, highlight the host, click the Server drop-down menu, click Install License Key, and then select the file (see Figure 9).
Figure 9. Install License Key
Systems Management Using Dell OpenManage
This section provides support information and describes the use of the Dell™ OpenManage™ systems management suite on Dell PowerEdge servers running Citrix XenServer Dell Edition.
Supported OpenManage Software Versions
Supported OpenManage versions on Dell PowerEdge Servers with Citrix XenServer Dell Edition are listed in Table 1.
Dell OpenManage Components and Support Information
Dell OpenManage is a suite of system management applications for managing Dell PowerEdge servers. This section lists the features available in Dell OpenManage and details what is supported and what is not supported when using Dell OpenManage with Citrix XenServer Dell Edition.
For more information on each of these features, refer to the Dell OpenManage website at www.dell.com/openmanage.
Dell OpenManage Server Administrator
Server Administrator provides single-server management with a secure command-line or web-based graphical management user interface. There are several sub-components in Server Administrator, as described below. Citrix XenServer Dell Edition comes pre-installed with the majority of OpenManage Server Administrator (OMSA) components, providing a ready-to-use systems management framework without any additional software installation.
INSTRUMENTATION SERVICES — Provides hardware instrumentation and configuration information. Instrumentation Services are pre-installed in Citrix XenServer Dell Edition.
STORAGE MANAGEMENT — Provides monitoring and instrumentation of the storage connected to Dell PERC and SAS family of controllers. OpenManage Server Administrator Storage Management is pre-installed in Citrix XenServer Dell Edition.
DSM SA CONNECTION SERVICE — Provides remote/local access to OMSA from any system with a supported Web browser and network connection.
DSM SA SHARED SERVICES — Runs inventory collector at startup to perform a software inventory of the system to be consumed by OMSA's SNMP and CIM providers. This allows a remote software update using Dell's system management console, Dell IT Assistant (ITA).
Dell Remote Access Controller
Dell Remote Access Controller (DRAC) is designed to allow anywhere, anytime "Lights Out" monitoring, troubleshooting, and server repairs/upgrades independent of OS status.
Remote Access Controller is supported with Citrix XenServer Dell Edition.
Dell IT Assistant
Dell IT Assistant (ITA) provides an integrated view of Dell's comprehensive suite of server monitoring and reporting tools. It includes one-to-many management for Dell Servers.
Hardware monitoring of Dell servers is supported with Citrix XenServer Dell Edition. Power monitoring of Dell servers is supported on PE servers that have power monitoring capability. Performance monitoring of Dell servers is not supported with Citrix XenServer Dell Edition.
Dell Systems Build and Update Utility 1.0
Dell Systems Build and Update Utility provides functionalities to update and deploy Dell servers. It replaces the functionality offered by Dell OpenManage Server Assistant. It also contains basic functionality provided by Dell OpenManage Server Update Utility and Dell OpenManage Deployment Toolkit. Dell Systems Build and Update Utility can be used to update the system firmware, configure system components such as DRAC and configure RAID groups through a graphical interactive wizard.
Server Update Utility
Server Update Utility helps simplify single-server updates with the latest system software, providing inventories, reports, and recommendations, and checks for prerequisite conditions.
Server Update Utility (command line option only) is supported with Citrix XenServer Dell Edition.
Dell Update Package (DUP)
As the central component of the OpenManage server management family, Dell Update Package framework helps you to update system software on your PowerEdge servers in a scalable, non-intrusive way. Dell Update Package features include:
• Self-extracting files allow you to update system software including BIOS, firmware, drivers, OMSA, etc.
• Pre-installation checks for prerequisites—such as system model, OS version and dependent software—to help you avoid sequencing errors
• Intuitive dialogs to help simplify installation
• Scriptable and silent capabilities that can enable unattended installation
Dell Update Packages to update system BIOS and firmware only are supported with Citrix XenServer Dell Edition. Since the majority of the Citrix XenServer Dell Edition image file system is read-only, driver and OMSA update packages are not supported. Dell IT Assistant can be used to execute BIOS and firmware updates to PowerEdge servers running Citrix XenServer Dell Edition.
IPMI Baseboard Management Controller
IPMI Baseboard Management Controller (BMC) provides a standard interface for monitoring and managing Dell Servers.
IPMI Baseboard Management Controller is supported with Citrix XenServer Dell Edition. The open source ipmitool can also be used to configure BMC settings and to manage XenServer hosts using IPMI.
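A few commonly used ipmitool invocations are sketched below. These are standard ipmitool verbs using the local (KCS) interface, but verify them against your build with `ipmitool help`; they are printed rather than executed here so the sketch is a safe dry-run anywhere.

```shell
# bmc_cmd composes an ipmitool invocation for the local BMC interface.
bmc_cmd() { echo "ipmitool -I open $*"; }

bmc_cmd sensor list      # read hardware sensor values
bmc_cmd sel elist        # decode the system event log
bmc_cmd lan print 1      # show BMC network settings on channel 1
```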
Deployment Toolkit
Deployment Toolkit helps provide quick and easy configuration of multiple servers from bare metal all the way through OS deployment. It also provides a framework for updating the BIOS.
Deployment Toolkit runs independently of the server operating system and is therefore unaffected by Citrix XenServer Dell Edition.
PowerEdge Service and Diagnostics Utilities
PowerEdge Service and Diagnostics Utilities provide operating system-level diagnostics and software components to detect and resolve hardware issues.
Service and diagnostics utilities are not supported by Citrix XenServer Dell Edition. Perform offline diagnostics by downloading utilities from support.dell.com.
Using Dell OpenManage in Citrix XenServer Dell Edition environment
As explained earlier, Dell OpenManage Server Administrator components come pre-installed in Citrix XenServer Dell Edition. OpenManage Server Administrator components cannot be uninstalled or reinstalled in XenServer Dell Edition. By default, all installed OpenManage Server Administrator services start during XenServer boot.
To start, stop, restart, or check the status of OpenManage services on XenServer Dell Edition, log in to the Local Console on your host and issue the following commands respectively:
# srvadmin-services.sh start
# srvadmin-services.sh stop
# srvadmin-services.sh restart
# srvadmin-services.sh status
To disable OpenManage services so that they do not start at boot, log in to the XenServer local console shell and issue the following commands:
# chkconfig --level 3 dsm_om_shrsvc off
# chkconfig --level 3 instsvcdrv off
# chkconfig --level 3 dataeng off
# chkconfig --level 3 dsm_om_connsvc off
# chkconfig --level 3 ipmi off
To re-enable disabled OpenManage services, log in to the XenServer local console shell and issue the following commands:
# chkconfig --level 3 dsm_om_shrsvc on
# chkconfig --level 3 instsvcdrv on
# chkconfig --level 3 dataeng on
# chkconfig --level 3 dsm_om_connsvc on
# chkconfig --level 3 ipmi on
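If you script these toggles, driving all five services from one list avoids missing a service. The sketch below only prints the chkconfig commands (so it can be dry-run off-host); drop the `echo` to execute them on a real XenServer host.

```shell
#!/bin/sh
# Generate the enable/disable commands for all OMSA services at once.
set -e
OM_SERVICES="dsm_om_shrsvc instsvcdrv dataeng dsm_om_connsvc ipmi"

toggle_om_services() {    # usage: toggle_om_services on|off
    state="$1"
    for svc in $OM_SERVICES; do
        echo chkconfig --level 3 "$svc" "$state"
    done
}

toggle_om_services off
```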
To access the OpenManage Server Administrator web interface, type the following web address in a supported browser window from a client system to connect to the XenServer host:
https://<XenServer hostname/IP Address>:1311
NOTE: 1311 is the default port used by the OpenManage web server.
The XenServer firewall is pre-configured to allow the ports used by OpenManage Server Administrator, so no additional firewall configuration is required. If you change the default OMSA port from 1311, make the corresponding change in /etc/sysconfig/iptables and restart the iptables service on the XenServer host.
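The firewall-rule edit that must accompany an OMSA port change can be sketched as follows. The rule text is illustrative (inspect your real /etc/sysconfig/iptables first), and the edit runs against a scratch copy so the sketch is safe to try anywhere.

```shell
#!/bin/sh
# Demonstrate swapping the OMSA port in an iptables rules file,
# using a temporary stand-in for /etc/sysconfig/iptables.
set -e
rules=$(mktemp)
new_port=1312    # hypothetical replacement for the default 1311
echo '-A RH-Firewall-1-INPUT -p tcp --dport 1311 -j ACCEPT' > "$rules"

# Swap the port in the rule; on a real host this would be followed
# by: service iptables restart
sed -i "s/--dport 1311/--dport $new_port/" "$rules"
grep -- "--dport $new_port" "$rules"
```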
To check the version of OpenManage installed, execute the following command on XenServer local console shell:
# omreport about
To view the system summary, execute the following command on XenServer local console shell:
# omreport system summary
For more information about using OpenManage to manage Dell servers, see the PowerEdge Documentation CD-ROM, which ships with the Dell server and is also available at support.dell.com/support/edocs/software/svradmin.
Using Dell IT Assistant in a Citrix XenServer Environment
ITA can be used for discovery, monitoring, and management of XenServer hosts. IT Assistant uses SNMP to manage Dell servers running Citrix XenServer Dell Edition.
To manage XenServer hosts using ITA, follow the steps below:
1 Specify SNMP community name: Log in to XenServer local console shell and edit the following entries in the /etc/snmp/snmpd.conf file to set your SNMP community name.
rocommunity <community name>
trapcommunity <community name>
2 Configure SNMP traps: Configure the SNMP daemon to send SNMP trap messages to the management console. Edit the /etc/snmp/snmpd.conf file and add the following line at the end of the file:
trapsink <ITA_IP_Address> <community name>
3 After making changes to /etc/snmp/snmpd.conf, save the file and restart snmpd service:
# service snmpd restart
4 Using the XenServer hostname or management IP address, perform a discovery and inventory of the XenServer host in IT Assistant. IT Assistant discovers XenServer hosts and lists them under the Servers category, as illustrated in Figure 10.
For more information on using IT Assistant to discover, inventory, monitor and manage Dell servers, refer to Dell OpenManage IT Assistant documentation at support.dell.com/support/edocs/software/smitasst.
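Steps 1 through 3 above can be scripted. The sketch below runs against a scratch copy of snmpd.conf so it is safe to try anywhere; the community string and IT Assistant address are placeholders, and on a real host the final step would be `service snmpd restart`.

```shell
#!/bin/sh
# Scripted SNMP setup for ITA, demonstrated on a temp copy of snmpd.conf.
set -e
conf=$(mktemp)                  # stand-in for /etc/snmp/snmpd.conf
community=public                # placeholder community name
ita_ip=192.168.0.50             # placeholder IT Assistant address

echo 'rocommunity changeme' > "$conf"
echo 'trapcommunity changeme' >> "$conf"

# Step 1: set the SNMP community name
sed -i "s/^rocommunity .*/rocommunity $community/" "$conf"
sed -i "s/^trapcommunity .*/trapcommunity $community/" "$conf"

# Step 2: direct traps at the management console
echo "trapsink $ita_ip $community" >> "$conf"

# Step 3 on a real host: service snmpd restart
grep -c "$community" "$conf"
```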
Figure 10. XenServers as listed in Dell ITA console
Change Management in XenServer Dell Edition
Dell Update Packages for supported PowerEdge servers are available for download at support.dell.com. IT Assistant provides a centralized software update capability. You can load Dell Update Packages and System Update Sets (system bundles) into the IT Assistant repository, either from the Dell Server Updates media or from the Dell support website. Updates can also be individually applied locally on each XenServer host.
Before applying an update to the server, make sure all the VMs on the server are powered off or the host is in maintenance mode so that there are no active VMs on the XenServer host.
Use the following steps to download a DUP for XenServer host:
1 Go to support.dell.com
2 Select Drivers and Downloads
3 Select the appropriate server model (example: PowerEdge 2950) or enter the Service Tag of the server
4 For Operating System, select Red Hat Enterprise Linux 5
5 Select the desired Category and File Title, then download the Update Package for Red Hat Linux
To upgrade system/device firmware or BIOS using ITA, create a software update task:
1 Download the appropriate BIN file and .sign file from the Dell support site as mentioned above
2 In the ITA console, add the DUP package to the IT Assistant repository through the Manage→Software Updates tab.
3 Create a task for Software update and provide the required host details
NOTE: Make sure that SSH is enabled on the XenServer host. If disabled, SSH can be enabled from the XenServer local console→Remote Service Configuration→Enable/Disable Remote Shell. ITA uses the SSH port to install DUP packages.
Another method is to upload the update package to the XenServer host using a file transfer program and execute the package. Perform the following as the root user:
1 Upload the update package (*.BIN file) to /var/tmp.
2 Log in to the XenServer local console shell.
3 Execute the command # ./<xxxx.BIN>
NOTE: <xxxx.BIN> is the name of the DUP file.
4 Continue as per instructions provided by the update package.
5 Reboot the server if required by the update package.
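The manual workflow above can be sketched end to end. A stand-in script is used here in place of a real DUP so the flow is runnable; on the host, DUP would be the actual /var/tmp/<xxxx>.BIN downloaded from support.dell.com and executed as root.

```shell
# Stand-in for the uploaded update package (a real DUP is a
# self-extracting .BIN from support.dell.com).
DUP=$(mktemp)
printf '#!/bin/sh\necho "inventory ok"\n' > "$DUP"

chmod +x "$DUP"   # the package must be executable before it can run
"$DUP"            # run it and follow its prompts; reboot if required
```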
For more information on using DUP with ITA, refer to ITA user guide at support.dell.com/support/edocs/software/smitasst.
Configuring Storage
Local Storage Repository Configuration
Local storage on the XenServer host can be configured as a Storage Repository (SR) to host virtual machines.
Follow the steps below to create a new SR on local storage:
Create a new storage volume
Using the storage controller BIOS or OpenManage Server Administrator Storage Manager, create a new storage volume on the XenServer host.
Create a volume using Controller BIOS:
A new storage volume can be created from the storage controller BIOS. Please follow the appropriate set of instructions to create a new virtual disk using the controller BIOS:
• PERC 6/i:
support.dell.com/support/edocs/storage/RAID/PERC6ir/en/index.htm
• SAS 6/iR:
support.dell.com/support/edocs/storage/RAID/SAS6iR/en/index.htm
Create a volume using Dell OpenManage Server Administrator Storage Management:
Using OpenManage Server Administrator Storage Management, a new storage volume or virtual disk can be created on the disks attached to Dell PERC or SAS storage controllers, as illustrated in Figure 11. To create a new virtual disk, follow the steps available at support.dell.com/support/edocs/software/svradmin/5.4/en/omss_ug/html/index.html.
Create Storage Repository
After creating a new storage volume/virtual disk, create a storage repository on the newly created volume, as illustrated in Figure 12. Note that creating an SR on a storage volume completely erases any data on the volume.
1 On the XenServer local console, go to Disk and Storage Repositories→Claim Local Disk as SR
2 Press <F8> to erase the disk and continue the SR creation process.
3 Select the newly created storage volume device and press <Enter> to create an SR.
4 Press <F8> to confirm creation of SR. The process may take several minutes to complete.
5 Once the SR is created, press <F8> to reboot the server to complete the SR creation process.
Figure 11. Create virtual disk using Dell Open Manage Storage Manager
Figure 12. Create SR on the newly created virtual disk
Deploying Citrix XenServer Dell Edition with Dell Storage Arrays
This section describes reference configurations and deployment steps to use Dell PowerVault and EqualLogic storage arrays with Citrix XenServer Dell Edition.
PowerVault MD1000
The PowerVault MD1000 is capable of housing up to 15 3.5-inch disk drives. This direct-attached storage enclosure supports both Serial Attached SCSI (SAS) and Serial ATA (SATA) disk drives. Each port on the PERC 6/E enables up to three enclosures to be daisy-chained to a single host. Alternatively, the enclosure's disk drives can be split between two servers, with up to eight drives assigned to one server and up to seven drives assigned to a second server. The enclosure mode switch must be set to the mode you want to use before the enclosure is powered on. Shared storage for XenServer pools is not supported in the MD1000 host-based RAID solution.
Reference Configurations
UNIFIED MODE — A unified configuration is one in which your MD1000 enclosure is connected to one host. In unified mode, your enclosure can be one of up to three enclosures daisy-chained to a single port on the PERC 6/E card in your XenServer host. See Figure 13 for an illustration of this configuration.
Figure 13. MD1000 – Unified Mode
SINGLE HOST, SPLIT BACKPLANE — In this configuration, one host contains two PERC 6/E cards. The hard drives are split into two groups with eight drives controlled by one PERC 6/E and seven drives controlled by the other PERC 6/E. Daisy-chaining MD1000 enclosures is not supported in split mode. See Figure 14 for an illustration of this configuration.
Figure 14. MD1000 – Single Host, split backplane
DUAL HOST, SPLIT BACKPLANE — In this configuration, the hard drives are split into two groups with eight drives controlled by one XenServer host and seven drives controlled by the other XenServer host. Daisy-chaining MD1000 enclosures is not supported in split mode. See Figure 15 for an illustration of this configuration.
Figure 15. MD1000 – Dual host, split backplane
Creating an SR on MD1000
The process to create an SR on virtual disks on the MD1000 enclosure is the same as creating an SR on the host's local hard disk storage. Refer to "Local Storage Repository Configuration" on page 26 for instructions on creating an SR on the MD1000.
PowerVault MD3000
The PowerVault MD3000 RAID enclosure is designed for high availability, offering redundant access to data storage. It features dual active/active RAID controller modules, redundant power supplies, and redundant fans. The RAID enclosure is designed for high-performance environments: two-node fully redundant XenServer hosts or multi-host storage access for up to four servers. XenServer hosts connected to the same RAID controller module can share a storage volume on the MD3000, so virtual machines can be migrated between the two hosts using XenMotion.
Built for high performance, the PowerVault MD3000 is a modular disk storage array capable of housing up to 15 3.5-inch SAS or SATA disk drives. The PowerVault MD3000 is expandable by simply adding up to two additional PowerVault MD1000 expansion enclosures. The entire array subsystem is managed from a single, user-friendly software application, the Modular Disk Storage Manager (MDSM), which streamlines the management and maintenance of storage as it scales. In-band management of an MD3000 array from a XenServer host is not supported. To manage MD3000 arrays using MDSM, install the MDSM software on a separate supported Windows or Linux management station.
Citrix XenServer Dell Edition includes the Dell RDAC multi-pathing driver that enables high availability configurations with PowerVault MD3000.
Reference Configurations
SINGLE HOST WITH A NON-REDUNDANT DATA PATH — This configuration contains one XenServer host with one SAS 5/E card and an MD3000 enclosure with one RAID controller. One port of the SAS 5/E card is connected to one MD3000 RAID controller. Two optional MD1000 enclosures can be daisy-chained to the MD3000 to provide additional storage capacity. See Figure 16 for an illustration of this configuration.
Figure 16. MD3000 – Single Host with a Non-redundant Data Path
REDUNDANT DATA PATHS FOR SINGLE-HBA HOST SERVER — This configuration contains one XenServer host with one SAS 5/E card and an MD3000 enclosure with two RAID controllers. One port of the SAS 5/E card is connected to the first port of the MD3000 RAID controller 0, while the second port of the SAS 5/E card is connected to the first port of the MD3000 RAID controller 1. Two optional MD1000 enclosures can be daisy-chained to the MD3000 to provide additional storage capacity. See Figure 17 for an illustration of this configuration.
If a path between the host and MD3000 fails, the LUNs will failover to the other controller, thus maintaining storage access to the XenServer host.
Figure 17. MD3000 – Redundant Data Paths for Single-HBA Host Server
UP TO FOUR HOSTS WITH A NON-REDUNDANT DATA PATH — This configuration contains up to four XenServer hosts, each with one SAS 5/E card, and an MD3000 enclosure with two RAID controllers. Each of the four ports on the MD3000's RAID controllers is connected to one port of a host's SAS 5/E card. Two optional MD1000 enclosures can be daisy-chained to the MD3000 to provide additional storage capacity.
In this configuration, hosts connected to the same controller can be made part of a XenServer pool. As shown in Figure 18, XenServer 1 and XenServer 2 can be part of a single pool, sharing LUNs controlled by MD3000 controller 1. Similarly, XenServer 3 and XenServer 4 can be part of another pool sharing LUNs controlled by MD3000 controller 0. If a controller path fails, no failover is available in this configuration.
Figure 18. MD3000 – Up to four Hosts with a Non-redundant Data Path
REDUNDANT DATA PATHS FOR TWO SINGLE-HBA HOST SERVERS — This configuration contains two XenServer hosts, each with one SAS 5/E card, and an MD3000 enclosure with two RAID controllers. The first port of each host's SAS 5/E card is connected to the first MD3000 RAID controller, while the second port of each host's SAS 5/E card is connected to the second MD3000 RAID controller. Two optional MD1000 enclosures can be daisy-chained to the MD3000 to provide additional storage capacity. See Figure 19 for an illustration of this configuration.
In this configuration, XenServer 1 and XenServer 2 can be part of a single pool, sharing LUNs controlled by either MD3000 controller 0 or controller 1. If a path between the host and MD3000 fails, the LUNs will failover to the other controller, thus maintaining storage access to the XenServer hosts.
Figure 19. MD3000 – Redundant Data Paths for Two Single-HBA Host Servers
REDUNDANT DATA PATHS FOR TWO DUAL-HBA HOST SERVERS — This configuration contains two XenServer hosts, each with two SAS 5/E cards, and an MD3000 enclosure with two RAID controllers. The first SAS 5/E card of each host is connected to the first MD3000 RAID controller, while the second SAS 5/E card of each host is connected to the second MD3000 RAID controller. Two optional MD1000 enclosures can be daisy-chained to the MD3000 to provide additional storage capacity. See Figure 20 for an illustration of this configuration.
In this configuration, XenServer 1 and XenServer 2 can be part of a single pool, sharing LUNs controlled by either MD3000 RAID controller 0 or controller 1. If a path between the host and MD3000 fails, the LUNs will failover to the other controller, thus maintaining storage access to the XenServer hosts.
Figure 20. MD3000 – Redundant Data paths for two dual-HBA host servers
REDUNDANT DATA PATHS FOR DUAL-HBA HOST SERVER, TWO SAS LINKS — This configuration contains one XenServer host with two SAS 5/E cards and an MD3000 enclosure with two RAID controllers. One port of the first SAS 5/E card is connected to the first MD3000 RAID controller, while
one port of the second SAS 5/E card is connected to the second MD3000 RAID controller. Two optional MD1000 enclosures can be daisy-chained to the MD3000 to provide additional storage capacity. See Figure 21 for an illustration of this configuration.
If a path between the host and MD3000 fails, the LUNs will failover to the other controller, thus maintaining storage access to the XenServer host.
Figure 21. MD3000 – Redundant Data paths for dual-HBA host server, two SAS links
REDUNDANT DATA PATHS FOR DUAL-HBA HOST SERVER, FOUR SAS LINKS — This configuration contains one XenServer host with two SAS 5/E cards and an MD3000 enclosure with two RAID controllers. One port of the first SAS 5/E card is connected to the first port of the MD3000 RAID controller 0, and the other port of the first SAS 5/E card is connected to the first port of the MD3000 RAID controller 1. One port of the second SAS 5/E card is connected to the second port of the MD3000 RAID controller 0, and the other port of the second SAS 5/E card is connected to the second port of the MD3000 RAID controller 1. Two optional MD1000 enclosures can be daisy-chained to the MD3000 to provide additional storage capacity. See Figure 22 for an illustration of this configuration.
If a path between the host and MD3000 fails, the LUNs will failover to another available path, thus maintaining storage access to the XenServer host.
Figure 22. MD3000 – Redundant data paths for dual HBA host server, four SAS links
Creating an SR on MD3000
Follow the steps below to create a Storage Repository on a storage volume on the PowerVault MD3000.
1 Create and configure a virtual disk using the MD3000 Modular Disk Storage Manager software (installed on your management station). Make sure the newly created virtual disk is controlled by the RAID controller to which your XenServer hosts have access.
2 If configuring storage on MD3000 for a XenServer pool, add all hosts to the XenServer pool.
3 On the XenServer host or pool master, identify the disk id of the MD3000 storage volume:
Make sure the storage volume is visible to the XenServer host:
# mppBusRescan
Issue the following command to get the SCSI device name for the storage volume:
# /opt/mpp/lsvdev
The output of this command will be similar to the one below:
[root@xs1~]# /opt/mpp/lsvdev
Array Name Lun sd device
-------------------------------------
MD3000_Array1 0 -> /dev/sdc
Note the SCSI device name (/dev/sdX) and find the corresponding disk id in the output of the following command:
# ls -ltr /dev/disk/by-id
The output of this command will be similar to the one below:
scsi-36001c23000c967da00000bae47ecaeeb -> ../../sdc
4 Execute one of the following commands to create a storage repository on the MD3000 virtual disk:
If adding storage to a pool:
# xe sr-create content-type=user name-label=<label_of_SR> shared=true type=lvmohba device-config:device=/dev/disk/by-id/<disk_id>
If adding storage to a stand-alone host:
# xe sr-create content-type=user name-label=<label_of_SR> type=lvmohba device-config:device=/dev/disk/by-id/<disk_id>
NOTE: <disk_id> is the id noted in step 3.
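Steps 3 and 4 can be tied together as below. This is a sketch: BYID is a scratch directory standing in for /dev/disk/by-id (seeded with the example id from step 3), the SR label is a placeholder, and the final xe command is echoed for review rather than executed; on a real host you would point BYID at /dev/disk/by-id and run the command on the pool master.

```shell
# Scratch stand-in for /dev/disk/by-id, seeded with the example mapping.
BYID=$(mktemp -d)
ln -s ../../sdc "$BYID/scsi-36001c23000c967da00000bae47ecaeeb"

DEV=sdc   # SCSI device name reported by /opt/mpp/lsvdev
DISK_ID=$(ls -l "$BYID" | awk -v d="$DEV" '$NF == "../../" d {print $(NF-2)}')

# Assemble the pool-wide sr-create call from step 4 (echoed for review).
CMD="xe sr-create content-type=user name-label=MD3000_SR shared=true \
type=lvmohba device-config:device=/dev/disk/by-id/$DISK_ID"
echo "$CMD"
```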
PowerVault MD3000i
The Dell PowerVault MD3000i storage solution is available in a standard or high-availability configuration. The standard model has a single controller with two 1GbE ports. It can be deployed to support up to 16 hosts non-redundantly. The high-availability model has dual controllers with two 1GbE ports per controller, for a total of four 1GbE ports. The dual-controller option can connect up to 16 fully redundant hosts.
The entire MD3000i array is managed from a single, user-friendly software application, the Modular Disk Storage Manager (MDSM), which streamlines the management and maintenance of storage as it scales. In-band management of an MD3000i array from a XenServer host is not supported. To manage MD3000i arrays using MDSM, install the MDSM software on a separate supported Windows or Linux management station.
XenServer Dell Edition comes preinstalled with the open-iSCSI initiator, which can be used to connect to the MD3000i. Alternatively, a software iSCSI initiator inside a virtual machine can be used to add a storage volume from an MD3000i array. By default, the physical network interface on which the XenServer management interface is configured is chosen to route the IP storage traffic. However, a physical interface or a bond of multiple interfaces can be configured to segregate storage traffic from management traffic. Please refer to "IP storage traffic segregation and high availability configuration" on page 66 for more details on specific configuration steps.
NOTE: For specific steps on using MD3000i with an iSCSI initiator running inside a virtual machine, refer to the Software Installation section of the MD3000i Systems Installation Guide.
Reference Configurations
This section describes reference XenServer configurations with MD3000i using the software iSCSI initiator pre-installed in XenServer Dell Edition. Multi-pathing is currently not supported on XenServer hosts connected to PowerVault MD3000i. A XenServer host or a pool of XenServer hosts can only access storage LUNs that are controlled by a single MD3000i RAID controller. Controller failover is not supported at this time.
ONE OR TWO DIRECT-ATTACHED SERVERS, SINGLE-PATH DATA, SINGLE CONTROLLER (SIMPLEX) — This configuration contains one or two XenServer hosts, each with one NIC for iSCSI connectivity, and an MD3000i enclosure with one RAID controller. One port of each host's NIC is connected to one
port on the MD3000i RAID controller. Two optional MD1000 enclosures can be daisy-chained to the MD3000i to provide additional storage capacity. XenServer hosts 1 and 2 as shown in Figure 23 cannot be a part of a pool.
Figure 23. MD3000i – One or Two Direct-Attached Servers, Single-Path Data, Single Controller (Simplex)
UP TO FOUR DIRECT-ATTACHED SERVERS, SINGLE-PATH DATA, DUAL CONTROLLERS (DUPLEX) — This configuration contains up to four XenServer hosts, each with one NIC for iSCSI connectivity, and an MD3000i enclosure with two RAID controllers. One port of each host's NIC is connected to one of the four ports across the two MD3000i RAID controllers. Two optional MD1000 enclosures can be daisy-chained to the MD3000i to provide additional storage capacity. See Figure 24 for an illustration of this configuration.
Pool configuration is not possible in this configuration.
Figure 24. MD3000i – Up to Four Direct-Attached Servers, Single-Path Data, Dual Controllers (Duplex)
UP TO 16 SAN-CONFIGURED SERVERS, SINGLE-PATH DATA, SINGLE CONTROLLER (SIMPLEX) — This configuration contains up to 16 XenServer hosts, each with one or two NICs (bonded to provide high availability) for iSCSI connectivity and an MD3000i enclosure with one RAID controller. One or two of each host's NICs are connected to an Ethernet switch, and the Ethernet switch has a connection to the I/O port on the MD3000i RAID controller. Two optional MD1000 enclosures can be daisy-chained to the MD3000i to provide additional storage capacity.
As shown in Figure 25, XenServer hosts connected to the same controller I/O port can be part of a single pool. Two NICs on the XenServer host can be bonded to provide high availability for the iSCSI storage traffic. Please refer to the Best Practices section 'IP Traffic Segregation and High Availability Configuration' for more details on how to configure high availability for IP storage.
Figure 25. MD3000i – Up to 16 SAN-Configured Servers, Single-Path Data, Single Controller (Simplex)
UP TO 16 SAN-CONFIGURED SERVERS, SINGLE-PATH DATA, DUAL CONTROLLERS (DUPLEX) — This configuration contains up to 16 XenServer hosts, each with one or two NICs for iSCSI connectivity, and an MD3000i enclosure with two RAID controllers. One or two of each host's NICs are connected to an Ethernet switch, and the Ethernet switch has one connection to each of the two MD3000i RAID controllers. Two optional MD1000 enclosures can be daisy-chained to the MD3000i to provide additional storage capacity.
As shown in Figure 26, XenServer hosts connected to the same controller network port can be part of a single pool. XenServer hosts connected to Controller 0 can be part of one pool, and XenServer hosts connected to Controller 1 can be part of another pool. At a given time, XenServer hosts in a pool can access LUNs controlled by either MD3000i RAID Controller 0 or Controller 1, but not both. Two NICs on the XenServer host can be bonded to provide high availability for the iSCSI storage traffic. Please refer to the Best Practices section 'IP Traffic Segregation and High Availability Configuration' for more details on how to configure high availability for IP storage.
Figure 26. MD3000i – Up to 16 SAN-Configured Servers, Single-Path Data, Dual Controllers (Duplex)
Creating an SR on MD3000i
Follow the steps below to create a Storage Repository on a storage volume on PowerVault MD3000i.
1 Manually change the XenServer host's iSCSI IQN to a string of your choosing by selecting the host in XenCenter, viewing its "General" tab, and clicking the "Edit" button. The "Edit General Settings" dialog box appears, where you can modify the iSCSI IQN string.
2 Using the MDSM interface, create virtual disks on the MD3000i using the steps described at support.dell.com/support/edocs/systems/md3000i/en/IG/PDF/IGbk0HR.pdf. Make sure that the XenServer host(s) have a physical path to the controller that owns the newly created virtual disk.
3 Run the Modular Disk Storage Manager and manually add the XenServer host(s) based on the new iSCSI IQN entered in step 1: After opening the Modular Disk Storage Manager and selecting the MD3000i storage array to be configured, select the Configure tab.
NOTE: In the examples that follow, the storage array “sg23_training” is an MD3000i with virtual disks already configured using the Create Virtual Disks selection. The new server being added to an existing host group is named “Valhalla”.
4 From the Configure tab, select Configure Host Access (Manual). Enter the host name for the server which has XenServer software installed. Select Linux as the host type.
5 From the next screen, specify the iSCSI initiator by selecting the New button (lower right on the screen). On the Enter New iSCSI Initiator screen, enter the XenServer iSCSI initiator name configured in step 1. The label is auto-populated from the server name. See Figure 27.
Figure 27. iSCSI Initiator Window
6 Host Group configuration starts from the subsequent screen titled "Configure Host Access (Manual) - Specify Host Group". If provisioning storage as shared storage for a XenServer pool, a host group must be defined so the MD3000i storage subsystem has a configured iSCSI path to each of the hosts.
7 Select “Yes: This host will share access to the same virtual disks with other hosts” and determine which of the following two options applies to your host group:
a If a new host group is desired, select the radio button for that option and enter a name for your host group using standard host naming conventions (for example, no spaces).
b Should you already have one or more host groups created, select the radio button enabling selection from a drop-down list of existing host groups. This option is used when configuring the second, third, and subsequent hosts in a group. Once the host group is selected, previously configured hosts for that host group will be displayed. Note that
these are shown as Linux hosts even though they are configured as XenServer hosts.
8 Selecting Next provides a Confirmation screen in which the new server being configured is shown and the other previously configured associated hosts are named. For the first server configured in a new host group there will be no associated hosts listed under the Associated host group (see Figure 28).
Figure 28. Modular Disk Storage Manager – Configure Tab
9 Select Finish to confirm the new host definition. This initiates the wizard configuration of the new host.
10 On completion, select Yes to proceed to the next host you wish to configure, or select No to end the configuration wizard.
11 Return to XenCenter and create a new Storage Repository by connecting to the desired XenServer host and clicking on its Storage tab. Click Add, choose the iSCSI radio button for Virtual disk storage, and click Next.
12 Enter the name for the new SR in the name field, the MD3000i controller portal IP address in the target host field, and 3260 in the port field. Then click "Discover IQNs" and "Discover LUNs" to populate the "Target IQN" and "Target LUN" fields. Select the appropriate LUN and click "Finish" to create a new SR.
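The XenCenter steps 11-12 have a CLI counterpart using XenServer's lvmoiscsi SR type. The sketch below only assembles and echoes the command so it can be reviewed; the target IP, IQN, and SR label are placeholders, and on a real deployment the IQN would come from "Discover IQNs" (or iscsiadm discovery) against the MD3000i portal.

```shell
# Placeholder values; substitute the portal IP and discovered target IQN.
TARGET=192.168.130.101
TARGET_IQN=iqn.1984-05.com.dell:powervault.md3000i.example
CMD="xe sr-create content-type=user name-label=MD3000i_SR shared=true \
type=lvmoiscsi device-config:target=$TARGET \
device-config:targetIQN=$TARGET_IQN"
echo "$CMD"
```

A full invocation typically also needs device-config:SCSIid to select the LUN; running the command without it makes xe report the candidate SCSI ids to choose from.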
PowerVault NX1950
The Dell PowerVault NX1950 is a network-attached storage solution running the Microsoft Windows Unified Data Storage Server (WUDSS) operating system and is pre-configured with a PowerVault MD3000 Storage Array. The NX1950 Integrated can be deployed in two configurations: Basic, with one data-mover, or High Availability, with two data-movers. The PowerVault NX1950 integrates seamlessly with key NFS capabilities, including identity management, file permission mapping and symbolic links. For information on storage configuration of NX1950, refer to the NX1950 Deployment Guide at support.dell.com/support/edocs/software/PVNX1950/en/dg/dg_en.pdf.
Additional details on NFS configuration may be found on the Microsoft website at technet2.microsoft.com/windowsserver/en/library/ecb3045e-6d7b-4ce3-9e9c-d311d922713a1033.mspx.
Reference Configurations
This section describes the supported XenServer configurations with PowerVault NX1950. Dell supports NX1950 Integrated basic and HA configurations with Citrix XenServer software using only the NFS protocol and not iSCSI. Citrix XenServer Dell Edition supports version 3 of the NFS protocol.
SINGLE NODE OR CLUSTER CONFIGURATION — This configuration contains up to 16 XenServer hosts, each with a NIC for storage traffic, connected to NX1950 basic (single NX1950 node) or NX1950 HA (two NX1950 nodes configured as a cluster). Each host's NIC is connected to an Ethernet switch, and the Ethernet switch has connections to each NX1950 NIC. Two optional MD1000 enclosures can be daisy-chained to the MD3000 to provide additional storage capacity. Storage Repositories on a XenServer host or a XenServer pool can be created on the NFS share on the NX1950. See Figure 29 for an illustration of this configuration.
Figure 29. NX1950 – Single Node or Cluster configuration
SINGLE NODE OR CLUSTER CONFIGURATION WITH TEAMED NICS — This configuration contains up to 16 XenServer hosts, each with one or two (bonded) NIC ports for NFS traffic connectivity, connected to NX1950 basic (single NX1950 node) or NX1950 HA (two NX1950 nodes configured as a cluster). The NX1950 node contains one or two teamed NICs to serve NFS traffic. All of each host's NIC ports are connected to a single Ethernet switch, and the Ethernet switch has connections to each NX1950 NIC port. Two optional MD1000 enclosures can be daisy-chained to the MD3000 to provide additional storage capacity. Storage Repositories on a XenServer host or a XenServer pool can be created on the NFS share on the NX1950. See Figure 30 for an illustration of this configuration.
Figure 30. NX1950 – Single Node or Cluster configuration with teamed NICs
SINGLE NODE OR CLUSTER CONFIGURATION WITH TEAMED NICS AND REDUNDANT SWITCHES — This configuration contains up to 16 XenServer hosts, each with two bonded NIC ports for storage connectivity, two Ethernet switches, and a single node or dual node NX1950 with two teamed NIC ports for serving NFS traffic. The first of each host's NIC ports is connected to one Ethernet switch, while the second of each host's NIC ports is connected to the second Ethernet switch. The two Ethernet switches are connected to each other. The Ethernet switches have connections to each NX1950 NIC port. Two optional MD1000 enclosures can be daisy-chained to the MD3000 to provide additional storage capacity. Storage Repositories on a XenServer host or a XenServer pool can be created on the NFS share on the NX1950. See Figure 31 for an illustration of this configuration.
Figure 31. NX1950 – Single Node or Cluster configuration with teamed NICs and redundant switches
Creating an SR on NX1950
1 Set up your NX1950 as an NFS server and create an NFS share. View the Microsoft Services for NFS Configuration Guide in the NX1950's Windows Unified Data Storage Server environment. This document includes instructions on how to use the Identity Mapping Setup wizard. After completing the initial configuration of Microsoft Services for NFS, you can use the Microsoft Services for NFS console for maintenance and administration of NFS.
2 Run XenCenter and create an SR by connecting to the desired XenServer host (or pool master, if adding storage to a pool) and clicking on its Storage tab. Click the "Add…" button, choose the NFS radio button for Virtual disk storage, and select Next. Enter the name for the SR in the name field and the NFS share in the Share Name field. Click the "Scan" button to scan for any existing SRs on the share. Otherwise, click the "Finish" button to create a new SR.
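The XenCenter steps above have a CLI counterpart using XenServer's nfs SR type. This sketch only echoes the command for review; the server address, export path, and SR label are placeholders for your NX1950's NFS share.

```shell
# Placeholder values for the NX1950 NFS export.
NFS_SERVER=192.168.140.20
NFS_PATH=/vol/xenserver
CMD="xe sr-create content-type=user name-label=NX1950_NFS shared=true \
type=nfs device-config:server=$NFS_SERVER \
device-config:serverpath=$NFS_PATH"
echo "$CMD"
```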
Dell EqualLogic PS Series
Dell EqualLogic PS Series iSCSI arrays simplify storage deployment by offering high performance, reliability, intelligent automation, and seamless virtualization of a single pool of storage. Each PS Series array supports up to 16 SAS or SATA disk drives. Performance and capacity may easily be scaled by adding additional PS Series arrays. All PS Series models have dual active/passive controllers with three 1GbE ports per controller, for a total of six 1GbE ports, and may be connected to up to 512 hosts.
By default, the physical network interface on which the XenServer management interface is configured is chosen to route the IP storage traffic. However, a different physical interface or a bond of multiple interfaces can be configured to segregate storage traffic from management traffic. Please refer to the Best Practices section 'IP Traffic Segregation and High Availability Configuration' for more details on specific configuration steps.
Reference Configurations
BASIC CONFIGURATION — This configuration contains XenServer hosts, each with a NIC for iSCSI connectivity and a PS Series enclosure with redundant RAID controllers. Each host's NIC is connected to an Ethernet switch, and the Ethernet switch has connections to all six ports on the RAID controllers. See Figure 32 for an illustration of this configuration.
Storage Repositories on a XenServer host or a XenServer pool can be created on the iSCSI volumes on the EqualLogic array.
Figure 32. PS Series – Basic Configuration
MULTI-HOST, HIGHLY AVAILABLE — This configuration contains XenServer hosts, each with two NICs (bonded to provide high availability) for iSCSI connectivity and a PS Series enclosure with redundant RAID controllers. Both of each host's NICs are connected to an Ethernet switch, and the Ethernet switch has connections to all six ports on the RAID controllers. See Figure 33 for an illustration of this configuration.
Storage Repositories on a XenServer host or a XenServer pool can be created on the iSCSI volume on the EqualLogic array. NIC0 and NIC1 on each XenServer host are bonded to provide high availability for the iSCSI storage traffic. This configuration provides redundancy against host NIC and array controller failure.
Figure 33. PS Series – Multi-Host, Highly Available
FULLY REDUNDANT — This configuration contains XenServer hosts, each with two NICs (bonded to provide high availability) for iSCSI connectivity and a PS Series enclosure with redundant RAID controllers. One of each host's NICs is connected to Ethernet switch 0, while the second of each host's NICs is connected to Ethernet switch 1. The two Ethernet switches are connected by two Inter Switch Links (ISLs). Ethernet switch 0 has connections to two ports on RAID controller 0 and one connection to RAID controller 1. Ethernet switch 1 has connections to two ports on RAID controller 1 and one connection to RAID controller 0. See Figure 34 for an illustration of this configuration.
Storage Repositories on a XenServer host or a XenServer pool can be created on the iSCSI volumes on the EqualLogic array. NIC0 and NIC1 on each XenServer host are bonded to provide high availability for the iSCSI storage traffic. This configuration provides redundancy against host NIC, switch, or array controller failure.
Figure 34. PS Series – Fully Redundant
Creating an SR on Dell PS Series Array
1 Configure the EqualLogic array, create a group, set a member RAID policy, and create a volume by following the instructions available in the PS Series Storage Arrays Quickstart guide available at www.equallogic.com/support.
2 Return to XenCenter and create an SR by connecting to the desired XenServer host (pool master if adding storage to a pool) and clicking on its Storage tab. Click the “Add…” button, choose the iSCSI radio button for Virtual disk storage, and select Next.
3 Enter the name for the new SR in the Name field, the EqualLogic array group IP address in the Target Host field, and 3260 in the Port field. Then click on "Discover IQNs" to perform target discovery. Each volume on an EqualLogic array has a unique target name with the LUN ID set to "0". Select the appropriate IQN and then click on "Discover LUNs" to discover the LUN associated with the target. Select the LUN and click "Finish" to create a new SR.
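The equivalent operation from the xe CLI uses sr-probe for target discovery and sr-create with the lvmoiscsi type. This is a sketch assuming XenServer 4.x syntax; the group IP, IQN, and SCSI ID below are placeholders.

```shell
# Discover available targets on the EqualLogic group
xe sr-probe type=lvmoiscsi device-config:target=<group_IP>

# Create the shared iSCSI SR once the IQN and SCSI ID are known
xe sr-create host-uuid=<host_uuid> content-type=user \
   name-label="EqualLogic iSCSI SR" shared=true type=lvmoiscsi \
   device-config:target=<group_IP> \
   device-config:targetIQN=<target_IQN> \
   device-config:SCSIid=<scsi_id>
```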
Moving an SR Between Hosts
If you wish to remove an SR from an existing XenServer host and use it with a different host, perform the following steps:
1 Power down the VMs and export their metadata by following the instructions in "Export the VM Metadata" on page 74.
2 Select the SR in XenCenter and click Detach.
3 Attach the SR to the second host by using XenCenter. If the SR resides on an MD3000, follow the instructions in "Attach the SR Snapshot to Secondary XenServer Host" on page 74.
4 Import the VM metadata by following the instructions in "Import VM Metadata to Recover VMs From A Snapshot" on page 75.
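For LVM-based SRs, steps 2 and 3 map onto the following xe commands. This is a sketch assuming XenServer 4.x syntax, with UUIDs and iSCSI parameters as placeholders; for an SR snapshot on an MD3000, follow the sr-introduce procedure referenced in step 3 instead.

```shell
# On the original host: detach and forget the SR
xe sr-list                          # note <sr_uuid> by its name label
xe pbd-list sr-uuid=<sr_uuid>       # note <pbd_uuid>
xe pbd-unplug uuid=<pbd_uuid>       # XenCenter "Detach" performs this step
xe sr-forget uuid=<sr_uuid>

# On the second host: reattach the existing SR by its UUID
xe sr-introduce uuid=<sr_uuid> type=lvmoiscsi \
   content-type=user name-label="Moved SR"
xe pbd-create host-uuid=<new_host_uuid> sr-uuid=<sr_uuid> \
   device-config:target=<target_IP> device-config:targetIQN=<iqn> \
   device-config:SCSIid=<scsi_id>
xe pbd-plug uuid=<new_pbd_uuid>
```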
Resizing an SR After Changing the Size of an LVM-Based Storage Volume
If you increase the size of an LVM-based volume (SAS, iSCSI, or Fibre Channel) on your storage array, for example because your virtual machines need more storage, the size of the Storage Repository on that volume is not updated automatically. Perform the following steps to adjust the size of the SR:
1 Shut down all virtual machines on the SR.
2 Note the UUID of the Storage Repository. This can be done using the xe sr-list command on the XenServer host and identifying the SR by its name label.
3 Identify the Physical Block Device (PBD) UUID corresponding to the SR. This can be done by executing the following command on XenServer:
# xe sr-param-list uuid=<SR UUID>|grep PBD
where <SR UUID> is the UUID of SR noted in step 2.
4 Unplug the Physical Block Device (PBD) corresponding to the Storage Repository.
# xe pbd-unplug uuid=<PBD UUID>
where <PBD UUID> is the UUID of PBD noted in step 3.
5 Plug the PBD.
# xe pbd-plug uuid=<PBD UUID>
6 Find the SCSI device mapping name of the physical volume on which the SR exists.
Identify the volume group (VG) corresponding to the SR.
Issue the following command on the XenServer host:
# pvs
The output of this command should be similar to one below:
PV VG Fmt Attr PSize PFree
/dev/sdd VG_XenStorage-058e9a1d-9b7e-71bc-7a4c-5b78d6e30bcb lvm2 a- 80.00G 38.00G
/dev/sde VG_XenStorage-4684b6c6-be6d-6267-b7b5-834a1fd30f65 lvm2 a- 59.99G 45.99G
The volume groups (VG) are named VG_XenStorage-<SR UUID>. Using the SR UUID noted in step 2, identify the correct volume group and the corresponding Physical Volume (PV) from the output of the above command.
7 Resize the Physical Volume:
# pvresize /dev/sd<x>
8 Scan the Storage Repository:
# xe sr-scan uuid=<SR UUID>
9 The SR size is now updated to the new size of the physical volume.
10 Power on the Virtual Machines.
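The steps above can be collected into a single sequence at the host console. This is a sketch assuming XenServer 4.x xe syntax and a single PBD per SR; the SR name label and the /dev/sd<x> device are placeholders to be substituted from your environment.

```shell
# Resize an SR after growing the underlying LVM-based volume
SR_UUID=$(xe sr-list name-label="<SR name>" --minimal)
PBD_UUID=$(xe pbd-list sr-uuid=$SR_UUID --minimal)
xe pbd-unplug uuid=$PBD_UUID
xe pbd-plug uuid=$PBD_UUID
pvs                     # find the /dev/sd<x> backing VG_XenStorage-$SR_UUID
pvresize /dev/sd<x>     # substitute the device identified above
xe sr-scan uuid=$SR_UUID
```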
Recovering the XenServer Host and VMs
We recommend that, whenever possible, you leave the installed state of XenServer Dell Edition servers unaltered. That is, treat XenServer hosts as if they are appliances and do not install any additional packages or start additional services on them.
XenServer uses a per-host database to store metadata about VMs and associated resources such as storage and networking. When combined with storage repositories, this database forms the complete view of all VMs available across the pool. To make your XenServer deployment readily recoverable, be sure to back up this database regularly.
Database Backup From XenServer Local Console
For the standard single XenServer case, you can back up the database using the XenServer local console.
To back up the XenServer database with the XenServer local console:
1 Insert a removable media device (USB stick) to save the backup.
2 Select Backup, Restore and Update from the XenServer local console menu and press ENTER.
3 Select Backup Server State and press ENTER.
You will be prompted to select which removable media device you want to write the backup to. Select the appropriate device and press ENTER.
4 You will be prompted to provide a filename for the backup. Type the desired filename and press ENTER.
The backup file will be written to the selected media.
5 Remove the media with the backed-up file and store it in a safe place for possible later use to recover XenServer state.
Backing up the database for single-server and pooled servers using XenCenter or the XenServer CLI is described in Appendix B of the XenServer Installation Guide at www.citrix.com/xenserver/dell.
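From a management system with the xe CLI installed, the same backup can be scripted directly. This is a sketch; the host addresses and credentials are placeholders.

```shell
# Back up a single host's state over the network
xe -s <host_IP> -u root -pw <password> host-backup file-name=<host>.xbk

# For pooled deployments, the pool database can also be dumped from the master
xe -s <master_IP> -u root -pw <password> pool-dump-database file-name=pool-db.bak
```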
Using the Recovery CD
In case the flash storage becomes corrupted or fails and needs to be replaced, you can recover the factory-installed XenServer image by using the Recovery CD, which can be downloaded from www.citrix.com/xenserver/dell. After recovering the factory-default image, you can restore the XenServer database and resume running your VMs.
To recover the Dell Edition using Recovery CD:
1 Burn the downloaded Recovery CD ISO to a CD disc or attach the ISO to the host using DRAC virtual media option.
2 Insert the new flash storage in the server.
3 Boot the server from the CD.
4 The Recovery CD will prompt you to copy the image to the flash storage. It will then copy the factory default image to the device.
5 When complete, remove the Recovery CD, reboot the server, and boot from the internal flash storage.
Restoring XenServer Database
After restoring the XenServer to the factory-default configuration, restore the backed-up XenServer database to regain your VMs and your particular network and storage configuration. This process differs depending on whether you have a single host or a resource pool, and on whether the restored XenServer is to be a pool member or a master. For the default case, you can use the XenServer local console.
To restore the XenServer database with the XenServer local console:
1 Insert the removable media device with the backup (USB stick).
2 Select Backup, Restore and Update from the local console menu and press ENTER.
3 Select Restore Server State from Backup and press ENTER.
You will be prompted to select which removable media device you want to restore the backup from. Select the appropriate device and press ENTER.
4 You will be prompted to select the backup file to restore from. Select the desired filename and press ENTER.
The contents of the backup file will be restored to the XenServer media.
5 Remove the media with the backed-up file and store it in a safe place for possible later use.
6 When complete, reboot the server.
If you were previously using a remote database, it must be set up once again. Ensure you use the same database. Follow instructions in "Setup of Remote Database on iSCSI LUN for Diskless Hosts" on page 69.
For full details on restoring the database for pooled XenServers, see the section "Backing up and restoring XenServer hosts and VMs" in Appendix B of the XenServer Installation Guide at www.citrix.com/xenserver/dell.
Recovering XenServer After a Board Replacement
In the event of a motherboard change in your XenServer host, the system Service Tag must be reset to its original value for XenServer to function. Download the latest asset.com utility from ftp.us.dell.com/utility. From a DOS prompt, execute asset.com /s <service tag>.
Resetting the Root Password
If the root password on the XenServer host is lost or forgotten, it may be reset. Boot the host from media for a supported Linux distribution. At a console, perform the following steps:
1 mount LABEL=xe-state /mnt
2 Edit the /mnt/xe-*/etc/passwd file using a text editor such as vi.
a Erase the characters on the first line after root, between the first and second colons.
b Type a new temporary password.
c Save and close the file.
3 umount /mnt
4 Reboot the server.
5 Change the password in the XenServer local console Change Password menu.
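If the rescue environment includes openssl, a hashed temporary password can be written in place of steps 2a–2c, avoiding a plaintext entry in the file. This is a sketch; the temporary password is a placeholder and should still be changed immediately after boot as in step 5.

```shell
# Mount the XenServer state partition from the rescue environment
mount LABEL=xe-state /mnt
# Generate an MD5 crypt hash for a temporary password (placeholder value)
HASH=$(openssl passwd -1 'TempPass123')
# Replace root's password field (the text between the first and second colons)
sed -i "s|^root:[^:]*:|root:${HASH}:|" /mnt/xe-*/etc/passwd
umount /mnt
```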
Recovering Virtual Machines in a XenServer Pool
In most typical environments two or more XenServer hosts are grouped together in resource pools. This approach offers a number of advantages such as centralized infrastructure management and VM failover. Virtual machines can be located on remote, shared storage to allow them to run on any physical server within the resource pool.
If XenServer host maintenance is planned, VMs can be live migrated between servers with no service downtime using XenMotion. However, if a physical XenServer host fails, there are a number of steps that need to be completed depending on the type of failure. Figure 35 displays an overview of the corrective action that needs to be taken to recover virtual machines.
Figure 35. Virtual Machine recovery flowchart
STEP 1 – IS THE POOL MASTER DOWN?
All commands within a XenServer pool are managed by the server known as the “pool master”. If the server that is currently designated as the pool master has failed, virtual machines running on other hosts will continue to run, but lifecycle management (e.g. starting/stopping VMs) within the pool will not be possible.
NOTE: Under some circumstances (such as power outages) there may have been multiple XenServer host failures in the pool. The pool master should be the first server to concentrate the recovery effort on.
If connection to the XenServer pool is not possible from XenCenter, or if issuing commands at the command line of a surviving host (such as xe host-list) returns a message like “Cannot perform operation as the host is running in emergency mode”, then the master server cannot be contacted and has likely failed.
NOTE: If the master server has NOT failed, skip forward to Step 3 – Verify which XenServer(s) failed.
STEP 2 – RECOVER POOL OPERATIONS (IF APPLICABLE)
If recovery of the pool master is not possible within a short timeframe, it will be necessary to promote a slave server to a master to recover pool management and restart any failed Virtual Machines.
1 Select a surviving XenServer within the pool that will be promoted (each slave server has a copy of the management database so a server selected at random is normally sufficient).
2 From the server’s command line, issue the following command:
# xe pool-emergency-transition-to-master
3 Once the command has completed, recover connections to the other slave servers using the following command:
# xe pool-recover-slaves
4 Verify that pool management has been restored by issuing a test command at the CLI (e.g. xe host-list).
Full details of how to recover from pool failures are available at: docs.xensource.com/XenServer/4.0.1/reference/ch02s06.html
STEP 3 – VERIFY WHICH XENSERVER(S) FAILED
This step verifies which XenServer(s) in the pool have failed and gives you the chance to note the UUIDs of the failed servers for later use.
1 Issue the following command at the Command line of a surviving pool member:
# xe host-list params=uuid,name-label,host-metrics-live
uuid ( RO) : 3125fef0-e5d1-47a8-909c-2b3812478541
name-label ( RO): TRNSRV-C-08
host-metrics-live ( RO): false
uuid ( RO) : 7ca28816-3b28-4035-b29d-3ba1129a402c
name-label ( RO): TRNSRV-C-07
host-metrics-live ( RO): true
2 Any servers listed as host-metrics-live = false have failed. If it was necessary to recover the pool master in "Step 2 – Recover Pool operations (if applicable)" on page 61, the master will now show as true. Note the first few characters of the UUID of any failed servers (or the UUID of the master server if it originally failed). It is not necessary to note the whole UUID, because tab-completion within the CLI can be used in the next step.
NOTE: Wherever possible, visually verify that the server has failed. If a failure has occurred in another subsystem on the server (e.g. the xapi service has failed) and virtual machines are still active, starting the VMs again could cause data corruption.
STEP 4 – VERIFY WHICH VIRTUAL MACHINES HAVE FAILED
Regardless of whether a Master or Slave server fails within a resource pool, virtual machines running on other hosts will continue to run. The next step verifies which VMs were running on the failed server(s).
1 Issue the following command at the CLI of a surviving server. Use the UUIDs noted in Step 3, type the first 3 digits of the UUID, and press [tab] to complete it:
# xe vm-list is-control-domain=false resident-on=<UUID_of_failed_server>
uuid ( RO) : 0106cb35-e809-f598-3d4b-16be79ec1a27
name-label ( RW): CentOS 5.0 (1)
power-state ( RO): running
uuid ( RO) : c7a0282c-3de2-9e21-4435-4cc49f7556eb
name-label ( RW): SUSE Linux Enterprise Server 10 SP1
power-state ( RO): running
2 Note that the power state of these VMs is still “running” even though the server they were running on has failed. The next step will reset this value to allow the VMs to be launched on a surviving pool member.
3 Repeat this step if necessary for any other failed servers.
STEP 5 – RESET POWER STATE ON FAILED VMS
In order to restart VMs after a host failure, it is necessary to reset their power state.
1 Issue the following command at the command line of a surviving server:
# xe vm-reset-powerstate resident-on=<UUID_of_failed_server> --force --multiple
NOTICE: Using the --multiple option may result in ALL virtual machines within the pool being reset. Be careful to use the “resident-on” parameter correctly. Alternatively, go back to "Step 4 – Verify which Virtual Machines have failed" on page 63, note down the UUID of each VM, and run the following command to reset each VM individually:
# xe vm-reset-powerstate uuid=<uuid_of_VM> --force
2 Verify that there are no VMs still listed as resident on the failed server by repeating step 4. The vm-list command should now return no results.
More information on resetting a VM's power state is available at docs.xensource.com/XenServer/4.0.1/reference/ch05s04.html#cli-xe-commands_vm-reset-powerstate.
STEP 6 – RESTART VMS ON ANOTHER XENSERVER
1 Load XenCenter and verify that each VM that was originally running on the failed server is now marked as halted (Red icon next to VM).
2 Restart each VM on a surviving pool member as normal.
NOTE: It is only possible to restart virtual machines on a surviving pool member if they are located on a remote shared storage repository. If any of a VM's virtual disk files are located on the local hard disk of the failed server, it will not be possible to start that VM without recovering the failed server.
Updating the XenServer Image to a New Version
The XenServer Dell Edition also differs from the standard editions in the way that updates to drivers and other components are handled. There is no provision to individually load updated drivers. You can only apply update packages provided by Citrix.
Updates are provided as downloads at www.citrix.com/xenserver/dell. When an update is available, download the update package and save it on a removable storage device such as a USB stick.
To install updates using the XenServer local console:
1 Select Backup, Restore and Update from the local console menu.
2 Select Apply Update and follow the prompts.
3 Reboot the server.
To install updates using XenCenter, refer to the Updating servers topic in XenCenter Help.
To install updates using the XenServer CLI, see the section "Applying updates using the CLI" in Appendix B of the XenServer Installation Guide at www.citrix.com/xenserver/dell.
Best Practices
IP storage traffic segregation and high availability configuration
XenServer hosts automatically manage NICs as needed based on the related network, VIF, PIF, and bond configuration. In the default XenServer networking configuration, all network traffic to IP-based storage devices occurs over the physical network interface used for the management interface. To dedicate a NIC to IP storage traffic, it must be configured as unmanaged, after which it can no longer be used for XenServer networks.
To dedicate a NIC to storage traffic:
1 Use the pif-list command to determine which PIF corresponds to the NIC to be dedicated to storage traffic.
2 Use the pif-forget command to mark the NIC as unmanaged. The NIC will remain unmanaged across host restarts.
3 Perform any required IP address configuration and enable the NIC using standard Linux commands. See example below.
4 Once storage resources are available via the NIC, create the appropriate storage repositories.
To change an unmanaged NIC to managed, use the pif-introduce command.
NOTE: Use of the pif-scan command will reset all NICs on the host to managed.
The sample steps below explain how to use two NICs configured as a bond for IP storage traffic.
1 Use the pif-list command to identify which PIFs you would like to dedicate to storage traffic:
# xe pif-list
uuid ( RO) : ba663036-6586-1c29-e1f0-7f29aa624d10
device ( RO): eth3
currently-attached ( RO): true
VLAN ( RO): -1
network-uuid ( RO): 8b1f0c1e-49e4-a0d3-cdc1-092edce7e070
uuid ( RO) : 9877d0be-e8eb-8abf-fa7e-7913adb34731
device ( RO): eth2
currently-attached ( RO): true
VLAN ( RO): -1
network-uuid ( RO): bf062d4f-b56a-b4d1-237f-cb03bab6cc46
uuid ( RO) : 4ebc91bf-226a-1591-9e3c-55c275ad24ef
device ( RO): eth1
currently-attached ( RO): true
VLAN ( RO): -1
network-uuid ( RO): 8847c967-6ba7-7102-a16f-8ff7032a20ae
uuid ( RO) : 32cc189a-b127-498a-e70b-af2c7e025bf3
device ( RO): eth0
currently-attached ( RO): true
VLAN ( RO): -1
network-uuid ( RO): 9267034e-af5b-35ca-05bd-ae56470b4dc7
2 In this example, eth2 and eth3 are chosen for storage traffic. Configure eth2 and eth3 as unmanaged devices:
# xe pif-forget uuid=ba663036-6586-1c29-e1f0-7f29aa624d10
# xe pif-forget uuid=9877d0be-e8eb-8abf-fa7e-7913adb34731
3 Create the bond device and enslave the eth2 and eth3 devices:
a Create a new bond configuration file (/etc/sysconfig/network-scripts/ifcfg-bond0) and add the following:
DEVICE=bond0
IPADDR=192.168.1.1
NETMASK=255.255.255.0
NETWORK=192.168.1.0
BROADCAST=192.168.1.255
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
NOTE: Ensure that the bond interface for storage traffic is on a separate physical or logical network from the management traffic. Otherwise, the first available interface in the routing table will be used for storage traffic. Modify the IPADDR, NETMASK, NETWORK, and BROADCAST values as per your network configuration.
b Modify configuration of eth2 device: open /etc/sysconfig/network-scripts/ifcfg-eth2 for editing and change the settings as below:
DEVICE=eth2
USERCTL=no
ONBOOT=yes
TYPE=Ethernet
HWADDR=00:1d:09:2d:23:a0
SLAVE=yes
MASTER=bond0
BOOTPROTO=none
c Modify configuration of eth3 device: open /etc/sysconfig/network-scripts/ifcfg-eth3 for editing and change the settings as below:
DEVICE=eth3
USERCTL=no
ONBOOT=yes
TYPE=Ethernet
HWADDR=00:1b:21:0b:54:60
SLAVE=yes
MASTER=bond0
BOOTPROTO=none
4 Modify /etc/modprobe.d/modprobe.conf.dist to load the bonding module at boot. Open /etc/modprobe.d/modprobe.conf.dist for editing and add the following at the end of the file:
alias bond0 bonding
5 Save and close the file.
6 Load the bonding module:
# modprobe bonding
7 Bring up the bond:
# ifup bond0
8 Using XenCenter, create a new iSCSI storage repository. The newly created bond device will now be used for storage traffic.
Setup of Remote Database on iSCSI LUN for Diskless Hosts
A remote database, if configured, is used to store the XenServer's xapi agent configuration settings that change frequently. The copy of the xapi database on the server's internal flash media is updated less frequently, because writes to it are limited to reduce wear on the device. If a server has an SR on local disks, a copy of the xapi database is also stored on the local SR. There is no write limit on the database copy that resides on the server's local SR or on a remote storage volume. Configuring a remote database is particularly useful for diskless servers.
To set up the remote database on an iSCSI LUN, use the XenServer local console menu Remote Service Configuration→ Setup Remote Database. This should be done only on the pool master, and only while there are no VMs running on the master. The hostname, username, password, and target IQN to access the iSCSI LUN are required. Setting up the remote database destroys anything currently on the LUN. Once the remote database is set up, it is updated far more frequently than the database on the server's internal flash media. If the pool master dies or is otherwise taken out of service, a slave should be transitioned to be the new master, and the remote database should subsequently be set up on the new master using the XenServer local console. While configuring the remote database on the new master, the existing database will be discovered and should be used.
Please note the following points:
• The remote database must never be set up on two running machines at once.
• Related to the above, when a slave has been transitioned to master, the old master must never be brought back online without removing the remote database configuration.
• A remote database is tied to a particular pool; it is not possible to use the database on a new host that is not already in the database.
To stop using a remote database, do the following on the master's console:
# rm /etc/xensource/remote.db.conf
# cp /etc/xensource/local.db.conf /etc/xensource/db.conf
Reboot the host.
Scripted Backup of XenServer Host Database
It is strongly recommended that the XenServer host database be regularly backed up to protect against unintentional configuration loss. This operation can be performed using XenCenter. However, it is beneficial to script a recurring backup from your management system where XenCenter is installed. To accomplish this, perform the following steps from your management system:
1 Create a directory to house the backup batch file and host database backup files, e.g.:
C:\backup
2 Create a new batch file, and paste the following into its contents:
@echo off
if "%1" == "" goto error
if "%2" == "" goto error
if "%3" == "" goto error
set hh=%time:~0,2%
if "%time:~0,1%"==" " set hh=0%hh:~1,1%
set timestamp=%date:~10,4%%date:~4,2%%date:~7,2%_%hh%%time:~3,2%%time:~6,2%
"C:\Program Files\Citrix\XenCenter\xe.exe" -s %1 -u %2 -pw %3 host-backup file-name=%1_%timestamp%.xbk
goto end
:error
echo usage is ^"backup ^<hostname or IP^> ^<username^> ^<password^>^"
pause
:end
exit
3 Create a Scheduled Task. In the “Run” field, enter the path to the batch file, followed by the hostname or IP address, username, and password, e.g.:
c:\backup\backup.bat 172.17.40.70 root rootpassword
For the “Start in” field, enter the directory you created in step 1.
Using storage array snapshots in a Citrix XenServer environment
Using the storage array snapshot features available on Dell PowerVault MD3000, MD3000i and Dell EqualLogic storage arrays, a point-in-time copy of a storage repository can be taken for tasks in the VMs such as backup, data mining, reporting, and testing software upgrades. A typical operation involves taking a snapshot of the SR, exporting the virtual machine metadata from the primary XenServer host, attaching the snapshot to a secondary XenServer host, and importing the virtual machine metadata on the secondary host to recover the virtual machines from the SR snapshot. A snapshot restore operation can also be performed to go back to a previous known good state.
Snapshot operations on a storage repository can be executed in essentially three ways, with varying degrees of impact on file system and application consistency and on service availability:
SHUT DOWN VMS BEFORE A SNAPSHOT — In scenarios where application consistency cannot be achieved using frameworks such as Microsoft VSS or VDS, creating a snapshot/clone of an SR while the virtual machines are powered off ensures a volume that can be cleanly recovered. Scenarios where all virtual disks of a VM are virtualized through the XenServer Domain0 fall into this category. Although this approach guarantees file-system- and application-consistent volume recovery, it incurs service downtime because the VMs on the SR must be shut down before the snapshot operation. This approach is useful when there are multiple VMs on an SR. The following actions are required to take a snapshot in this scenario:
1 Shut down all the virtual machines on the SR.
2 From the primary XenServer host, export VM metadata for the VMs on the SR.
3 Take a snapshot of the SR.
4 Power on the VMs on the primary XenServer host.
5 Attach the snapshot of the SR on secondary XenServer host.
6 On the secondary XenServer host, import the VM metadata to recover virtual machines.
7 Power on the VMs on the secondary XenServer host.
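For a single VM, steps 1, 2, and 4 on the primary host reduce to the following xe commands. This is a sketch; the host address, credentials, and VM name are placeholders, and the SR snapshot itself (step 3) is taken with the storage array's own management tools.

```shell
# Step 1: shut the VM down cleanly
xe -s <primary_host> -u root -pw <password> vm-shutdown vm=<vm_name>

# Step 2: export the VM metadata
xe -s <primary_host> -u root -pw <password> vm-export metadata=true \
   filename=<vm_name>.meta vm=<vm_name>

# Step 3: take the SR snapshot using the array management tools, then...

# Step 4: power the VM back on
xe -s <primary_host> -u root -pw <password> vm-start vm=<vm_name>
```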
SUSPEND VMS BEFORE A SNAPSHOT — In this scenario, the virtual machines are suspended and their suspended state is saved in the Suspend Storage Repository. After suspending the VMs on the SR, a snapshot of the SR on which the VM virtual disks reside and of the Suspend SR (if different) can be taken to ensure file-system- and application-consistent snapshots. When the VMs are powered on from the suspended state, they resume at the point where they were suspended. Depending on the memory state of the virtual machines on the SR, the service downtime can be shorter or longer than with the shutdown approach above. This approach is useful when there are multiple VMs on an SR. The following actions are required to take a snapshot in this scenario:
1 Suspend all the virtual machines on the SR. This requires that XenTools be installed and running in all VMs.
2 From the primary XenServer host, export VM metadata for the VMs on the SR.
3 Take a snapshot of the SR and the Suspend SR (if different than the SR for which snapshot was created).
4 Power on the VMs on the primary XenServer host.
5 Attach the snapshot of the SR and Suspend SR (if different than the SR for which snapshot was created) on secondary XenServer host.
6 Import the VM metadata to recover the virtual machines.
7 Power on the VMs on the secondary XenServer host.
SNAPSHOT A VOLUME WHILE VMS ARE RUNNING — In this scenario, a snapshot operation on a storage repository is initiated while the virtual machines are running and performing I/O to the storage volume. Unlike the two approaches above, there is no service downtime. In situations where all VM virtual disks reside on an SR whose I/O is entirely controlled by XenServer Domain0, taking a storage volume snapshot results in a file-system and application crash-consistent snapshot. In situations where the I/O to virtual disks is entirely controlled by the VM, such as when an iSCSI initiator running inside the VM connects directly to SAN volumes, administrators can use array-based snapshots in conjunction with frameworks such as Microsoft VSS or VDS to create application-consistent snapshots. This approach is useful when each virtual disk of a VM is a separate volume on the storage array.
Export the VM Metadata
This section explains the steps required to export the virtual machine metadata from the primary XenServer host. This metadata can then be imported on the secondary XenServer host where the VMs need to be recovered. Perform the following steps for each VM that resides on the SR:
1 Shut down or suspend the VM. If the VM is suspended, snapshots will need to be taken of the Suspend SR as well as the SR for the VMs.
2 Run the following command on the XenCenter management station:
# xe -s <Primary XenServer Management IP/HostName> -u root -pw <XenServer password> vm-export metadata=true filename=<filename> vm=<vm_name>
To export metadata for multiple VMs, use the --multiple switch:
# xe -s <Primary XenServer Management IP/HostName> -u root -pw <XenServer password> vm-export metadata=true filename=<filename> --multiple
Attach the SR Snapshot to Secondary XenServer Host
Make the snapshot volume available to the secondary XenServer host by attaching the SR. For snapshots of iSCSI volumes, just attach the SR to the XenServer host using XenCenter. If the snapshot resides on an MD3000, follow the steps below to attach the snapshot as an SR to a XenServer host.
1 Note the UUID of the Storage Repository for which the snapshot was created. This can be done using the xe sr-list command on the primary XenServer host and identifying, in the command output, the SR by its name label.
# xe sr-list
2 Run:
# xe sr-introduce content-type=user name-label=SAS_snapshot type=lvmohba uuid=<sr_uuid>
3 Identify the disk id of the storage volume:
# ls -ltr /dev/disk/by-id
4 Identify the host UUID by running:
# xe host-list
5 Run:
# xe pbd-create device-config:device=/dev/disk/by-id/<disk_id> host-uuid=<host_uuid> sr-uuid=<sr_uuid>
6 Identify the PBD UUID by running:
# xe pbd-list
7 Run:
# xe pbd-plug uuid=<pbd_uuid>
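The lookup-and-attach sequence in steps 2 through 7 can be collected into a small script. The sketch below is a dry run that only echoes the xe commands (set RUN to empty to execute them for real); the SR UUID, host UUID, and disk id are placeholders you would substitute from the xe sr-list, ls -ltr /dev/disk/by-id, and xe host-list output described above.

```shell
#!/bin/sh
# Dry-run sketch of attaching an SR snapshot: introduce the snapshot SR,
# create a PBD for it on this host, then plug the PBD. All UUID and
# disk-id values below are placeholders (assumptions).
SR_UUID=11111111-2222-3333-4444-555555555555
HOST_UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
DISK_ID=scsi-3600a0b8000123456
RUN=echo   # set RUN= (empty) to actually run the commands

$RUN xe sr-introduce content-type=user name-label=SAS_snapshot \
  type=lvmohba uuid="$SR_UUID"
$RUN xe pbd-create device-config:device=/dev/disk/by-id/"$DISK_ID" \
  host-uuid="$HOST_UUID" sr-uuid="$SR_UUID"
# xe pbd-create prints the new PBD's UUID; plug it with:
# $RUN xe pbd-plug uuid=<pbd_uuid>
```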
Import VM Metadata to Recover VMs From A Snapshot
Run the following command for each VM in the snapshot, directing the CLI at the secondary (target) XenServer host where the VMs are to be recovered:
# xe -s <Secondary XenServer Management IP/HostName> -u root -pw <XenServer password> vm-import metadata=true filename=<filename> sr-uuid=<sr_uuid> vm=<vm_name>
NOTE: <sr_uuid> is the UUID of the SR. Both the SR and its snapshot will
have the same UUID.
To import metadata for multiple VMs, use the --multiple switch:
# xe -s <Secondary XenServer Management IP/HostName> -u root -pw <XenServer password> vm-import metadata=true filename=<filename> sr-uuid=<sr_uuid> --multiple
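Recovery can likewise be scripted. The sketch below is a dry run that prints a vm-import command (matching this section's purpose) per VM; the host, password, SR UUID, VM names, and metadata paths are placeholders, not values from this guide.

```shell
#!/bin/sh
# Dry-run sketch of recovering several VMs from previously exported
# metadata files. XSHOST, XSPASS, SR_UUID, the VM names, and /backup
# are placeholder values (assumptions).
XSHOST=192.168.0.20
XSPASS=changeme
SR_UUID=11111111-2222-3333-4444-555555555555

build_import_cmd() {
  # $1 = VM name, $2 = metadata file saved earlier by vm-export
  printf 'xe -s %s -u root -pw %s vm-import metadata=true filename=%s sr-uuid=%s vm=%s\n' \
    "$XSHOST" "$XSPASS" "$2" "$SR_UUID" "$1"
}

for vm in vm1 vm2; do
  build_import_cmd "$vm" "/backup/$vm-meta.xva"
done
```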
Adjusting SCSI Timeouts to Tolerate Storage Controller Failures
To allow time for I/O to complete when the MD3000 or MD3000i goes through exception recovery, it is recommended that you adjust the SCSI timeouts on XenServer hosts and on virtual machines whose storage repositories reside on the MD3000 or MD3000i.
Adjusting SCSI Timeouts for PowerVault MD3000
If the XenServer host is connected to an MD3000, adjust the SCSI timeout on Windows VMs residing on SRs on the MD3000 by setting the following registry value:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\disk\TimeOutValue. Create the value if it is not present, and set it to 160.
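The registry change can be captured in a .reg file and merged on each affected Windows VM. The fragment below is an illustrative sketch of that change; TimeOutValue is a DWORD, so the decimal value 160 is written as hexadecimal a0.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\disk]
"TimeOutValue"=dword:000000a0
```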
There is no need to modify SCSI timeout in paravirtualized Linux VMs as the VM virtual disks are not presented as SCSI disks.
Adjusting SCSI Timeouts for PowerVault MD3000i
If the XenServer host is connected to an MD3000i, change the SCSI timeout value on the XenServer host for the SCSI device(s) on the MD3000i. Create a new udev rule file named 96-md3000i-sto.rules in /etc/udev/rules.d/ and add the following text:
KERNEL=="sd*[!0-9]", ACTION=="add", SYSFS{model}=="MD3000i", SYSFS{vendor}=="DELL", RUN+="/bin/sh -c 'echo 160 > /sys$DEVPATH/device/timeout'"
Save the file. Attach the iSCSI volume to the XenServer host and create an SR.
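Creating the rule file can also be scripted. The sketch below writes the rule text shown above into a directory passed as an argument, so it can be exercised against a temporary directory before targeting /etc/udev/rules.d/ on the host; the helper name is illustrative.

```shell
#!/bin/sh
# Sketch: generate the MD3000i udev timeout rule into a given directory.
# Pass /etc/udev/rules.d on the XenServer host; a temp dir works for a
# dry run. The rule body matches the one shown above; the quoted 'EOF'
# heredoc keeps $DEVPATH literal so udev expands it at event time.
write_md3000i_rule() {
  # $1 = target directory for the rule file
  cat > "$1/96-md3000i-sto.rules" <<'EOF'
KERNEL=="sd*[!0-9]", ACTION=="add", SYSFS{model}=="MD3000i", SYSFS{vendor}=="DELL", RUN+="/bin/sh -c 'echo 160 > /sys$DEVPATH/device/timeout'"
EOF
}

# Dry run into a temporary directory:
tmp=$(mktemp -d)
write_md3000i_rule "$tmp"
```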
Adjust the SCSI timeout on Windows VMs residing on SRs on the MD3000i by setting the following registry value: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\disk\TimeOutValue. Create the value if it is not present, and set it to 160.
There is no need to modify SCSI timeout in paravirtualized Linux VMs as the VM virtual disks are not presented as SCSI disks.
Using an iSCSI initiator inside a VM is the same as using an iSCSI initiator in a physical server. To attach MD3000i volumes to a VM using the iSCSI initiator inside the VM, refer to the Software Installation section of the MD3000i Systems Installation Guide for specific steps to install the relevant Dell multi-pathing drivers and supported iSCSI initiators. No extra steps are required to modify SCSI timeouts when Dell multi-pathing drivers control the LUN(s) on the MD3000i array.
Configuring XenServer Management Network for High Availability
Please refer to the “Creating NIC Bonds” section of the XenServer Administrator’s Guide located at www.citrix.com/xenserver/dell for detailed instructions on configuring XenServer management network for HA.
Appendix
XenServer Dell Edition Local Console Menu Items
This section describes details of each of the available menu options in the local console configuration utility.
The host local console can be used to configure the XenServer Dell Edition and to set up Storage Repositories for VMs. While installing, running, and managing VMs can also be accomplished by opening a command shell and using the xe command line interface, we recommend that you use the XenCenter management interface to work with VMs. Select Manage Server Using XenCenter from the XenServer local console menu for information on how to download and install XenCenter on a Windows workstation for use as the management console for the XenServer.
On the left side of the XenServer local console, there is a list of menu names or commands. Use the Up and Down keys to select from the list, and ENTER to display the menu or initiate the command.
The following sections cover the XenServer local console menus and sub-menus.
Status Display
Displays the server vendor and model, the XenServer version and build, and the configuration of the management NIC: its device name, IP address, netmask, and gateway.
Manage Server Using XenCenter
Provides a download link to get the XenCenter application from the Citrix website.
Network and Management Interface
Provides a menu of commands for setting up network configuration of the management interface.
Configure Management Interface
Allows you to select and configure which of the NICs available on the server will be used as the XenServer management interface. Configuring the management interface is described in detail in the section named Initial Boot Setup.
Add/Remove DNS Servers
Allows you to manage DNS servers: add new servers or remove existing ones.
Network Time (NTP)
Allows you to configure network time servers for synchronizing clock time between servers. You can enable or disable NTP, and add or remove NTP servers.
Test Network
Allows you to use the Linux ping command to test the configured network. You can ping several fixed addresses: the local address, the gateway address, and an Internet address. You can also ping a custom address by providing an IP.
Display NICs
Displays the network interfaces on the server by description, device name, and MAC address, and shows if they are connected or not.
Authentication
Log In/Out
Provides a login prompt.
Change Password
Allows changing of the root user password. This will also change the password for local and remote login shells. If the host is in a Resource Pool, it will also change the password of the Pool master.
To change the root password, perform the following steps:
1 From the XenServer local console, select Authentication. The Authentication menu replaces the list in the left pane.
2 Select Change Password and press Enter. A Change Password box appears.
3 Type the current password, then type the desired new password and repeat it in the fields provided. Press ENTER to set the new password.
Change Auto-Logout Time
Allows changing of the auto-logout timer. Default is 5 minutes. Users are automatically logged out after there has been no keyboard activity for this time. The timeout applies to this console and to local shells started from this console.
XenServer Details and Licensing
Displays the XenServer product name and version, the Xen version, the kernel version, the Product SKU, the license expiry date, and the number of sockets.
Press ENTER to see further license details and to access the Install XenServer License sub-menu.
Install XenServer License
Allows you to install a XenServer license to update to XenServer Standard or Enterprise editions.
To install a XenServer license, perform the following steps:
1 Select Install XenServer License on the left, and press ENTER.
2 Log in if prompted.
3 A dialog box appears. Select the device containing the License file and press ENTER.
Hardware and BIOS Information
System Description
Displays the server manufacturer, the System Model, the Service Tag number, and the Asset Tag number.
Processor
Displays the number of Logical CPUs, number of populated CPU sockets, the total number of CPU sockets, and the CPU description string.
System Memory
Displays the total memory, the number of populated memory sockets, and the total number of memory sockets.
Local Storage Controllers
Lists the storage controllers on the server.
BIOS Information
Displays the system BIOS vendor and version.
BMC Information
Displays the BMC firmware version.
Keyboard and Timezone
Keyboard Language and Layout
Use this option to select the correct keyboard language and layout for your keyboard.
Set Timezone
Use this option to set the Timezone for the server.
To set the timezone, perform the following steps:
1 Select Set Timezone and press ENTER.
2 Log in if prompted.
3 A dialog box appears. Select the Region and press ENTER.
4 Select a City within the Region and press ENTER.
5 A box appears with the new timezone setting and the current time. Press ENTER to clear the message.
Disk and Storage Repositories
Displays the available disks and Storage Repositories (SRs).
Claim a Local Disk as SR
Local disks can be configured as SRs for Virtual Machines. Press ENTER to list the disks available and claim one or more for VM storage.
NOTE: Local disks are automatically claimed for use as SRs for XenServer VMs on
first boot if the disks contain the Dell utility partition.
Specify Suspend SR
Allows you to specify which SR the suspended image of a VM will be saved to. By default, this is not configured.
Specify Crash Dump SR
Allows you to specify which SR crash dumps of VMs will be saved to. By default, this is not configured.
View Local Storage Controllers
Lists the storage controllers on the server.
Remote Service Configuration
This menu configures remote databases, remote secure shell (ssh) access, and remote logging (syslog) to remote servers.
Enable/Disable Remote Shell
Controls whether the server can be logged into via ssh. Enabled by default. Press ENTER to toggle between states.
Remote Logging (syslog)
Allows configuration of a remote server to be the destination for syslog messages.
To configure a remote server as the destination for syslog messages:
1 Select Remote Logging (syslog) and press ENTER.
2 Log in if prompted.
3 A dialog box appears. Enter the destination IP address (or leave it blank to disable remote logging) and press ENTER.
Setup Remote Database
A remote database, if configured, stores the frequently changing configuration settings of the XenServer xapi agent. The copy of the xapi database on the flash media is updated less frequently, to reduce the number of writes to the device. This setting is particularly useful for diskless servers.
Backup, Restore, and Update
Apply Update
Press ENTER to update the XenServer image on the flash media.
Backup Server State
Press ENTER to backup the server state to removable media.
Restore Server State from Backup
Press ENTER to restore the server state from a backup on removable media.
Technical Support
Validate Server Configuration
Checks the basic configuration of the XenServer. The information displayed is whether VT is enabled on the CPU, whether a default local storage repository has been created, and whether a management NIC has been assigned.
Upload Bug Report
Allows you to upload a bug report file to a support ftp site.
Save Bug Report
Allows you to save a bug report file to removable media.
Enable/Disable Verbose Boot Mode
Controls the level of information displayed as the XenServer boots.
Reset to Factory Defaults
Resets all configuration information to factory default values, deletes all virtual machines, and deletes all Storage Repositories on local disks.
Reboot or Shutdown
This option allows you to shutdown or reboot the server.
Reboot Server
Reboots the server into normal operating mode.
Shutdown Server
Shuts down the server.
Local Command Shell
Opens a local command shell for user root on this server. From this shell you can execute basic Linux commands and also the XenServer xe command-line interface, with which you can manage the XenServer control domain, storage repositories, and virtual machines. We recommend that you use the XenCenter administration console. For detailed reference, and examples of using the CLI for storage management, see the XenServer Administrator's Guide at www.citrix.com/xenserver/dell. For details on using the CLI to install and manage VMs, see the XenServer VM Installation Guide at www.citrix.com/xenserver/dell.
Supported Peripherals and Device Drivers in Citrix XenServer Dell Edition 4.1
Network Interface Cards
The driver names and versions below are those shipped with Citrix XenServer 4.1.

Vendor     Product                                                    Driver  Version
Broadcom   Broadcom® NetXtreme 5722 Single Port Gigabit Ethernet NIC  tg3     3.80-rh
Broadcom   Embedded Broadcom® NetXtreme II 5708 Gigabit Ethernet NIC  bnx2    1.6.3d
Broadcom   Broadcom® NetXtreme II 5708 1-Port Gb Ethernet NIC         bnx2    1.6.3d
Intel      Intel® PRO 1000PT Single Port Server Adapter               e1000   7.6.9.2-NAPI
Intel      Intel® PRO 1000PT Dual Port Server Adapter                 e1000   7.6.9.2-NAPI
Intel      Intel® PRO 1000PF Single Port Server Adapter               e1000   7.6.9.2-NAPI
Intel      Intel® PRO 1000VT Quad Port Gigabit NIC                    igb     1.2.18.4
Intel      Intel® 10GBase-SR XF Single Port 10GbE NIC                 ixgbe   1.3.7.8-LRO
Storage Controllers
The driver names and versions below are those shipped with Citrix XenServer 4.1.

Vendor  Product                                  Driver        Version
Dell    SAS 5/E Controller                       mptsas        4.00.07.00
Dell    SAS 6/iR Integrated Controller           mptsas        4.00.07.00
Dell    PERC 6/E SAS RAID Controller             megaraid_sas  00.00.03.16
Dell    PERC 6/i Integrated SAS RAID Controller  megaraid_sas  00.00.03.16

Troubleshooting
This section provides troubleshooting steps for typical issues.
The Server Does Not Boot Into Citrix XenServer Dell Edition
Symptoms:
• The Citrix XenServer Dell Edition software does not boot.
• You receive a "No OS found" message on power on.
Resolution:
Check the boot order. The internal flash storage device may no longer be selected as the first boot device. This can happen if the device was recently removed due to failure. To correct this:
1 Boot the server and press <F2> when prompted in the upper right corner of the screen.
2 In the setup menu, scroll down to "Boot Sequence" and press <Enter>.
3 Make sure "Hard drive c:" is selected and press <Enter>.
4 Scroll down to "Hard-Disk Drive Sequence" and press <Enter>.
5 Select "Internal USB…" or "SD Card…", press <Enter> and <ESC>. Save your settings.
Unable to Power on Windows Virtual Machines
Symptom:
On powering on a Windows VM, the error "HVM is required for this operation" is displayed.
Resolution:
1 Enable CPU Virtualization Technology in the BIOS. Log in to the XenServer local console shell and issue the following command:
# omconfig chassis biossetup attribute=cpuvt setting=enabled
2 Reboot the system.
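After the reboot, the CPU flags in /proc/cpuinfo should include vmx (Intel VT) or svm (AMD-V). The helper below is an illustrative sketch for that check, written to take the text as an argument so it can be tested without a XenServer host; the function name is an assumption.

```shell
#!/bin/sh
# Sketch: check whether a cpuinfo-style text reports hardware
# virtualization flags (vmx for Intel VT, svm for AMD-V). On a host you
# would pass "$(cat /proc/cpuinfo)" as the argument.
has_hvm_flags() {
  printf '%s\n' "$1" | grep -qE '(^| )(vmx|svm)( |$)'
}

# Usage on a XenServer host:
# has_hvm_flags "$(cat /proc/cpuinfo)" && echo "HVM capable"
```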
References
• Dell|Citrix XenServer solutions home page: www.dell.com/xenserver
• Citrix XenServer product documentation: www.citrix.com/xenserver/dell
• Dell OpenManage Server Administrator 5.4 documentation: support.dell.com/support/edocs/software/svradmin/5.4/index.htm
• Dell OpenManage IT Assistant documentation: support.dell.com/support/edocs/software/smitasst/8.2/index.htm
• Dell PowerVault MD1000 documentation: support.dell.com/support/edocs/systems/md1000/
• Dell PowerVault MD3000 documentation: support.dell.com/support/edocs/systems/md3000/
• Dell PowerVault MD3000i documentation: support.dell.com/support/edocs/systems/md3000i/
• Dell EqualLogic documentation: www.equallogic.com/support/