H3C Servers iFIST User Guide

New H3C Technologies Co., Ltd.
http://www.h3c.com
Software version: iFIST-1.31 or higher
Document version: 6W102-20210414

Copyright © 2019-2021, New H3C Technologies Co., Ltd. and its licensors
All rights reserved
No part of this manual may be reproduced or transmitted in any form or by any means without prior written consent of New H3C Technologies Co., Ltd.
Trademarks
Except for the trademarks of New H3C Technologies Co., Ltd., any trademarks that may be mentioned in this document are the property of their respective owners.
Notice
The information in this document is subject to change without notice. All contents in this document, including statements, information, and recommendations, are believed to be accurate, but they are presented without warranty of any kind, express or implied. H3C shall not be liable for technical or editorial errors or omissions contained herein.
Preface

This preface includes the following topics about the documentation:
• Audience.
• Conventions.
• Documentation feedback.
Audience

This documentation is intended for:
• Network planners.
• Field technical support and servicing engineers.
• Server administrators working with the G3 Server.
Conventions The following information describes the conventions used in the documentation.
Command conventions
Convention Description

Boldface Bold text represents commands and keywords that you enter literally as shown.
Italic Italic text represents arguments that you replace with actual values.
[ ] Square brackets enclose syntax choices (keywords or arguments) that are optional.
{ x | y | ... } Braces enclose a set of required syntax choices separated by vertical bars, from which you select one.
[ x | y | ... ] Square brackets enclose a set of optional syntax choices separated by vertical bars, from which you select one or none.
{ x | y | ... } * Asterisk marked braces enclose a set of required syntax choices separated by vertical bars, from which you select a minimum of one.
[ x | y | ... ] * Asterisk marked square brackets enclose optional syntax choices separated by vertical bars, from which you select one choice, multiple choices, or none.
&<1-n> The argument or keyword and argument combination before the ampersand (&) sign can be entered 1 to n times.
# A line that starts with a pound (#) sign is a comment.
GUI conventions
Convention Description
Boldface Window names, button names, field names, and menu items are in Boldface. For example, the New User window opens; click OK.
> Multi-level menus are separated by angle brackets. For example, File > Create > Folder.
Symbols
Convention Description
WARNING! An alert that calls attention to important information that if not understood or followed can result in personal injury.
CAUTION: An alert that calls attention to important information that if not understood or followed can result in data loss, data corruption, or damage to hardware or software.
IMPORTANT: An alert that calls attention to essential information.
NOTE: An alert that contains additional or supplementary information.
TIP: An alert that provides helpful information.
Network topology icons
Convention Description
Represents a generic network device, such as a router, switch, or firewall.
Represents a routing-capable device, such as a router or Layer 3 switch.
Represents a generic switch, such as a Layer 2 or Layer 3 switch, or a router that supports Layer 2 forwarding and other Layer 2 features.
Represents an access controller, a unified wired-WLAN module, or the access controller engine on a unified wired-WLAN switch.
Represents an access point.
Represents a wireless terminator.
Represents omnidirectional signals.
Represents directional signals.
Represents a security product, such as a firewall, UTM, multiservice security gateway, or load balancing device.
Represents a security module, such as a firewall, load balancing, NetStream, SSL VPN, IPS, or ACG module.
Examples provided in this document Examples in this document might use devices that differ from your device in hardware model, configuration, or software version. It is normal that the port numbers, sample output, screenshots, and other information in the examples differ from what you have on your device.
Documentation feedback You can e-mail your comments about product documentation to [email protected]
We appreciate your comments.
iFIST overview ·········· 1
    iFIST features and functionality ·········· 1
        OS Installation Wizard ·········· 1
        Server Diagnostics ·········· 1
    Applicable scenarios ·········· 1
    Applicable products ·········· 1
Guidelines ·········· 3
Signing in to iFIST ·········· 4
    Preparing for an iFIST sign-in ·········· 4
        Prerequisites for a direct iFIST sign-in ·········· 4
        Prerequisites for an iFIST sign-in through the HDM remote console ·········· 5
    Procedure ·········· 5
    iFIST Web interface ·········· 6
Using the OS installation wizard ·········· 8
    Supported operating systems ·········· 8
    Supported storage controllers ·········· 9
    iFIST built-in drivers ·········· 10
    General restrictions and guidelines ·········· 15
    Prerequisites ·········· 15
    OS installation workflow ·········· 16
    Configuring basic settings ·········· 17
    Configuring RAID arrays ·········· 20
        Creating a RAID array ·········· 21
        Managing physical drives ·········· 22
        Managing logical drives ·········· 25
    Configuring system settings ·········· 26
    Verifying the configuration ·········· 30
    Triggering automated operating system installation ·········· 31
Server diagnostics ·········· 33
    Restrictions and guidelines ·········· 33
    Viewing server module information ·········· 33
    Performing fast diagnostics ·········· 35
    Performing stress tests ·········· 38
    Exporting data ·········· 40
Downloading logs ·········· 41
Updating iFIST ·········· 42
    Procedure ·········· 42
    Example: Updating iFIST on a server in UEFI boot mode ·········· 42
FAQ ·········· 47
iFIST overview The integrated Fast Intelligent Scalable Toolkit (iFIST) is a single-server management tool embedded in H3C servers. You can access iFIST directly after the server startup and initialization are complete. No manual installation is required.
iFIST enables you to perform a range of configuration and management tasks on the local server from a simple, unified Web interface, including:
• Installing operating systems.
• Diagnosing key server components.
• Downloading logs.
iFIST features and functionality

iFIST provides the following features and functionality:
• OS Installation Wizard—Configure RAID arrays and install an operating system for the server on a logical drive.
• Server Diagnostics—Diagnose the health status of the components on the server.
OS Installation Wizard Traditionally, administrators must go to different feature pages to complete a complicated set of tasks in order to install an operating system on a server.
iFIST integrates the OS installation tasks into the OS installation wizard that guides you through the installation process step-by-step from a unified interface. The OS installation wizard reduces operation complexity and chances of misconfigurations.
Through the iFIST OS installation wizard, you can configure RAID arrays, install drivers, and export and import configuration files. After the installation configuration is complete, iFIST automatically installs the operating system on the server.
Server Diagnostics Server Diagnostics scans the components on the server to collect statistics for component-based performance and health diagnosis. It facilitates server troubleshooting and reduces the risks of unexpected problems during server usage.
Server Diagnostics supports diagnosing various components on the server, including the CPU, PSU, fan, HDM, memory, and PCIe devices.
Applicable scenarios When access to a remote HDM system is not available, you can use iFIST for in-band local server management.
To use iFIST on a server, you must connect a monitor, a keyboard, and a mouse to the server.
Applicable products This guide is applicable to the following products:
• H3C UniServer R2700 G3
• H3C UniServer R2900 G3
• H3C UniServer R4400 G3
• H3C UniServer R4700 G3
• H3C UniServer R4900 G3
• H3C UniServer R4950 G3 Naples
• H3C UniServer R4950 G3 Rome
• H3C UniServer R5300 G3
• H3C UniServer R6700 G3
• H3C UniServer R6900 G3
• H3C UniServer R8900 G3
• H3C UniServer B5700 G3
• H3C UniServer B5800 G3
• H3C UniServer B7800 G3
• H3C UniServer E3200 G3
• H3C UniServer R4700 G5
• H3C UniServer R4900 G5
• H3C UniServer R4950 G5 Rome
• H3C UniServer R5300 G5
• H3C UniServer R5500 G5 AMD
• H3C UniServer R6900 G5
• H3C UniServer B5700 G5
Guidelines The information in this document might differ from your product if it contains custom configuration options or features.
The model name of a hardware option in this document might differ slightly from its model name label. A model name label might add a prefix or suffix to the hardware-coded model name for purposes such as identifying the matching server brand or applicable region. For example, storage controller model HBA-1000-M2-1 represents storage controller model label UN-HBA-1000-M2-1, which has a prefix of UN-.
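The prefix convention above can be sketched in code. This is an illustration only: the `UN-` prefix comes from the example in this guide, and the helper function is hypothetical, not part of any H3C tool.

```python
# Known branding prefixes that a model name label may add to the
# hardware-coded model name (from the example in this guide).
KNOWN_PREFIXES = ("UN-",)

def hardware_model(label: str) -> str:
    """Strip a known branding prefix from a model name label."""
    for prefix in KNOWN_PREFIXES:
        if label.startswith(prefix):
            return label[len(prefix):]
    return label

print(hardware_model("UN-HBA-1000-M2-1"))  # HBA-1000-M2-1
```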
The webpage screenshots used in this document are for illustration only and might differ from your products.
To obtain help information when you use iFIST, click the question mark icon at the upper right of the webpage.
Signing in to iFIST

Preparing for an iFIST sign-in
You can sign in to iFIST on a server either directly or from the remote console of the HDM Web interface.
The following information describes the prerequisites for a successful sign-in to iFIST.
Prerequisites for a direct iFIST sign-in

To sign in to iFIST on a server directly, you must connect a monitor, a mouse, and a keyboard to the server.
For a rack server such as an H3C UniServer R4900 G3 server:
• Connect the monitor to the server through the VGA connector.
• Connect the mouse and keyboard to the server through the USB connectors.
Figure 1 Connecting a monitor, a mouse, and a keyboard to a rack server
For a blade server such as an H3C UniServer B5800 G3 server, connect the monitor, mouse, and keyboard to the server through SUV connectors, as shown in Figure 2.
Figure 2 Connecting a monitor, a mouse, and a keyboard to a blade server
Prerequisites for an iFIST sign-in through the HDM remote console
Prepare the hardware environment for signing in to iFIST through HDM. For more information, see H3C Servers HDM User Guide.
Procedure

1. Launch the remote console from the HDM Web interface. For more information, see H3C Servers HDM Quick Start Guide. Skip this step if a local direct KVM connection is used.
2. Reboot the server.
3. On the POST screen shown in Figure 3, press F10.
Figure 3 Launching iFIST from the POST screen (BIOS version 2.00.26)
The Web interface of iFIST is displayed, as shown in Figure 4.
iFIST Web interface

As shown in Figure 4, the iFIST Web interface contains the following areas:

Area Description

Administrative section Contains buttons for the following actions (icons are shown in Figure 4):
• Return to the iFIST home page.
• View the iFIST version information.
• Change the display language.
• Exit iFIST and reboot the server.
• Download logs.

Work pane Displays links to the functions provided by iFIST. To obtain help information when you use iFIST, click the question mark icon at the upper right corner.
Using the OS installation wizard

Supported operating systems

You can install the following types of operating systems through the iFIST OS installation wizard:
• Red Hat Enterprise Linux.
• SuSE Linux Enterprise Server.
• CentOS.
• Ubuntu Server.
• VMware ESXi.
• CAS.
• Oracle Linux.
• Windows Server (except Windows Core OS).
Table 1 lists the operating systems and their versions that can be installed through the iFIST OS installation wizard.
Table 1 Supported operating systems
OS type Version
SuSE Linux Enterprise Server
SLES 15 (64 bit) (includes XEN & KVM)
CentOS
VMware ESXi 6.7 (64 bit)
VMware ESXi 6.7 U3 (64 bit)
VMware ESXi 7.0 (64 bit)
Ubuntu Server Ubuntu Server 17.10 (64 bit)
Ubuntu Server 18.04 (64 bit) – LTS
CAS CAS 5.0
Windows Server
Microsoft Hyper-V Server 2012 R2
Microsoft Windows Server 2016 Essentials
Microsoft Windows Server 2016 Standard
Microsoft Windows Server 2016 Datacenter
Microsoft Hyper-V Server 2016
Microsoft Hyper-V Server 2019
Supported storage controllers

The iFIST OS installation wizard supports the following types of storage controllers:
• HBA-1000-M2-1
• RAID-P430-M1
• RAID-P430-M2
• HBA-H460-M1
• RAID-P460-M4
• HBA-H460-B1
• RAID-P460-B4
• HBA-LSI-9311-8i-A1-X
• RAID-LSI-9361-8i(1G)-A1-X
• RAID-LSI-9361-8i(2G)-1-X
• RAID-LSI-9460-8i(2G)
• RAID-LSI-9460-8i(4G)
• RAID-LSI-9460-16i(4G)
• RAID-L460-M4
• RAID-P5408-Mf-8i-4GB
• HBA-H5408-Mf-8i
• HBA-LSI-9440-8i
• HBA-LSI-9300-8i-A1-X
• RAID-P4408-Mf-8i-2GB
• RAID-P2404-Mf-4i-2GB
• RAID-P5408-Ma-8i-4GB
• RAID-P4408-Ma-8i-2GB
• RAID-P460-B2
• RAID-P460-M2
iFIST built-in drivers

When you install Windows on a server through iFIST, you can choose to install iFIST built-in drivers as needed. Table 2 lists the iFIST built-in drivers.
Table 2 iFIST built-in drivers
Driver name Driver version Supported OSs Supported servers
FC-HBA-LPe31002 12.6.165.0
• Microsoft Windows Server 2016
• Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
FC-HBA-LPe31000 12.6.165.0
• Microsoft Windows Server 2016
• Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
• Microsoft Windows Server 2016
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
FC-HBA-LPe12000 12.0.367.0 Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
FC-HBA-LPe12002 11.2.139.0
• Microsoft Windows Server 2016
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
FC-HBA-LPe12002 12.0.367.0
• Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
RAID-9361-8i-1G 6.714.18.0
• Microsoft Windows Server 2016
• Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
RAID-9361-8i-2G 6.714.18.0
• Microsoft Windows Server 2016
• Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
RAID-P430-M2 7.5.0.57011
• Microsoft Windows Server 2012 R2
• Microsoft Windows Server
• R2700 G3 • R2900 G3
RAID-P430-M1 7.5.0.57011
• Microsoft Windows Server 2016
• Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R4300 G3
HBA-9300-8i 2.51.26.0
• Microsoft Windows Server 2016
• Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
HBA-H460-M1 106.190.4.1062
• Microsoft Windows Server 2016
• Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R4300 G3
HBA-9311-8i 2.51.26.0
• Microsoft Windows Server 2016
• Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
RAID-9460-16i 7.710.8.0
• Microsoft Windows Server 2016
• Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
RAID-9460-8i 7.708.12.0 Microsoft Windows Server 2012 R2
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
RAID-9460-8i 7.713.12.0
RAID-P460-M2 106.178.0.1009
• Microsoft Windows Server 2016
• Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R4300 G3
RAID-P460-M4 106.84.2.64
• Microsoft Windows Server 2016
• Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R4300 G3
NIC-560F-B2 3.14.78.0 Microsoft Windows Server 2012 R2
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
NIC-560F-B2 4.0.217.0 Microsoft Windows Server 2016
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
NIC-560F-B2 12.18.9.1 Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
NIC-360T-L3 1.6.31.0
• Microsoft Windows Server 2016
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3
NIC-360T-L3 1.10.130.0 Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3
NIC-530F-B2 7.13.109.0 Microsoft Windows Server 2016
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
NIC-530F-B2 7.13.171.0 Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
NIC-360T-B2 12.14.7.0 Microsoft Windows Server 2012 R2
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
NIC-360T-B2 12.15.184.0 Microsoft Windows Server 2016
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
NIC-360T-B2 12.18.9.1 Microsoft Windows Server 2019
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
NIC-560F-L2 1.6.31.0
• Microsoft Windows Server 2016
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
NIC-560T-L2 1.6.31.0 Microsoft Windows Server 2016
• R4900 G3 • R6900 G3 • R8900 G3 • R6700 G3 • R4300 G3
ASPEED-Graphics-Family 1.01 Microsoft Windows Server 2012 R2
• R2700 G3 • R2900 G3 • R4700 G3 • R4900 G3
General restrictions and guidelines

To determine the chip type of the storage controller, see H3C Servers Storage Controller User Guide.
Make sure only one bootable medium is mounted to the server. If more than one bootable medium is mounted, the server might fail to identify the correct boot medium, and operating system installation will fail as a result.
The OS installation wizard supports the following controllers:
• PMC controllers in RAID (Hide-RAW), HBA, or Mixed mode.
• LSI controllers in RAID mode.
You can change the image source only in the basic settings step. After the installation starts, do not remove the image source or intervene manually.
To install an operating system on a server in UEFI boot mode, make sure only the system drive (target logical drive) contains a UEFI partition. Operating system installation will fail if a UEFI partition exists on a non-system drive.
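The UEFI-partition restriction above can be checked before starting an installation. The sketch below shows only the comparison logic; obtaining each partition's type GUID (for example, with `lsblk -o NAME,PARTTYPE`) is assumed to happen elsewhere, and this check is an illustration, not part of iFIST.

```python
# EFI System partition type GUID defined by the UEFI specification.
EFI_SYSTEM_PARTITION_GUID = "c12a7328-f81f-11d2-ba4b-00a0c93ec93b"

def is_uefi_partition(parttype_guid: str) -> bool:
    """Return True if a GPT partition type GUID marks an EFI System partition."""
    return parttype_guid.strip().lower() == EFI_SYSTEM_PARTITION_GUID

# A UEFI partition found on any non-system drive must be removed before
# installation, or the installation will fail.
print(is_uefi_partition("C12A7328-F81F-11D2-BA4B-00A0C93EC93B"))  # True
```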
The OS installation wizard does not support OS installation or RAID configuration on the onboard RAID controller.
Prerequisites

Before using the OS installation wizard, complete the following tasks:
• Mount the storage media that contains the OS image to the server. Supported storage media include CD (physical CD or HDM virtual media) and USB flash drive.
• If driver installation is required, mount to the server the storage media that contains a REPO image file in a format matching the ISO image format. iFIST cannot recognize compressed driver installation packages.
• If iFIST is accessed through the HDM remote console, you must mount both the OS image file and the REPO image file to the HDM remote console. Figure 5 shows an example of mounting to the KVM remote console the OS image file and REPO image file stored on a virtual CD.
Figure 5 Mounting the OS image file and REPO image file
OS installation workflow Figure 6 shows the workflow of using the OS installation wizard to install an operating system.
Figure 6 OS installation workflow
Configuring basic settings The configuration parameters in this task vary by the storage controller used.
Procedure

1. On the iFIST home page, click OS Installation Wizard.
The OS installation wizard displays the Configure basic settings page, as shown in Figure 7. For descriptions of the parameters on the page, see "Parameters."
Figure 7 Configuring basic settings
2. Select the storage controller to be configured from the Target controller list.
3. Check the Controller mode field to verify that the controller is operating in a supported mode. If a PMC controller is used, the controller mode must be RAID (Hide-RAW), HBA, or Mixed. If an LSI controller is used, the controller mode must be RAID.
4. From the Global write cache list, select a global write cache mode for the physical drives attached to the controller. Alternatively, set the write cache mode for drives in the Physical drive write cache area.
5. From the Configuration method list, select Customize config or Import config file.
6. If Import config file is selected, specify the location of the configuration file to be imported.
The configuration file to be imported must meet the following requirements:
• The file must meet the validation criteria described in "Configuration file validation." Verify the file against the validation criteria before the import.
• The file must be stored on a USB flash drive formatted with the FAT32 or NTFS file system.
If the Import config file configuration method is used, iFIST automatically creates the logical drives by using all the available capacity of the member drives on the server. The logical drive capacity settings in the configuration file are not imported.
7. Select the type of media where the OS image file resides. Options are CD (physical CD or HDM virtual media) and USB flash drive.
8. (Optional.) Select the driver source. Options are CD (physical CD or HDM virtual media) and USB flash drive.
9. Click Next.
Parameters

• Target controller—Select the storage controller to be configured.
• Controller mode—Verify that the operating mode of the selected storage controller is supported by iFIST. For a PMC controller, the controller mode must be RAID (Hide-RAW), HBA, or Mixed. For an LSI controller, the controller mode must be RAID.
• JBOD—Indicates whether the Just a Bunch Of Disks (JBOD) mode is enabled or disabled. The value can be ON or OFF. This parameter is not displayed if the storage controller does not support the JBOD attribute.
ON—The JBOD mode is enabled. The operating system can access a disk directly without creating a RAID volume first.
OFF—The JBOD mode is disabled. The operating system cannot see a disk until the disk is included in a RAID volume.
• Global write cache—This parameter is displayed only if it is supported by the selected storage controller. Set the global write cache mode for the physical drives attached to the storage controller. Options are:
Enabled—Enables write cache for all physical drives. Enabling write cache for physical drives improves the system's read and write performance.
Disabled—Disables write cache for all physical drives. Write cache is typically disabled for physical drives used to build logical drives to prevent data loss in case of power failures.
Drive specific—Sets the write cache policy for physical drives individually.
• Physical drive write cache—This parameter is displayed only if it is supported by the selected storage controller. Set the write cache mode for the following types of drives separately:
Configured Drives—Configured physical drives attached to controllers operating in RAID or Mixed mode.
Unconfigured Drives—Unconfigured physical drives attached to controllers operating in RAID or Mixed mode.
HBA Drives—Physical drives attached to storage controllers operating in HBA mode.
Supported write cache modes are:
Default—Uses the default write cache mode for physical drives.
Enabled—Enables write cache for physical drives.
Disabled—Disables write cache for physical drives.
• Configuration method—Select the method for configuring RAID and operating system installation parameters. Options are:
Customize config—Manually configures RAID and operating system installation settings.
Import config file—Imports the settings from a configuration file stored on a floppy drive or a USB flash drive mounted to the server.
• Image source—Select the type of media where the image file resides. Options are CD (physical CD or HDM virtual media) and USB flash drive. If you select USB flash drive, iFIST displays the paths of image files detected on the USB flash drive in a list next to this field. Select an image file from the list.
Follow these guidelines when you configure the image source parameters:
To install the SuSE Linux Enterprise Server operating system, make sure the following conditions are met:
− The image file resides on a USB flash drive partition formatted with the FAT32 file system.
− The pathname of the image file (including the image file name) does not contain Chinese characters or spaces.
To install the Red Hat Enterprise Linux operating system, make sure the following conditions are met:
− The image file resides on a USB flash drive partition formatted with the FAT32 or EXT2/3/4 file system.
− The pathname of the image file (including the image file name) does not contain spaces.
To install the CentOS operating system, make sure the following conditions are met:
− The image file resides on a USB flash drive partition formatted with the FAT32 or EXT2/3/4 file system. If a USB flash drive partition is formatted with the FAT32 file system, make sure the image file size does not exceed 4 GB.
− The pathname of the image file (including the image file name) does not contain Chinese characters or spaces.
To install the Red Hat Enterprise Linux 6.7/6.8/6.9 or CentOS 6.10 operating system, make sure the following conditions are met:
− The USB flash drive partition where the image file resides has 300 MB or more free space.
− To avoid installation failure, save only one ISO image file in the image file directory.
To install the Ubuntu Server or CAS operating system, make sure the image source is CD (physical CD or HDM virtual media).
To install the VMware ESXi operating system, make sure the image file resides on a USB flash drive partition formatted with the FAT32 or NTFS file system.
To install the Windows Server operating system, make sure the image file resides on a USB flash drive partition formatted with the NTFS file system.
To install the Oracle Linux operating system, make sure the following conditions are met:
− The image file resides on a USB flash drive partition formatted with the EXT2/3/4 file system.
− The pathname of the image file (including the image file name) contains only letters and digits.
• Driver source—Select the storage medium where the REPO image resides. Options are CD (physical CD or HDM virtual media) and USB flash drive.
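The image-file constraints above can be verified before starting the wizard. A hedged sketch for the CentOS case, assuming the partition's file system type has already been determined elsewhere; the function name and error strings are illustrative, not part of iFIST:

```python
FAT32_MAX_FILE_SIZE = 4 * 1024 ** 3  # FAT32 single-file limit: 4 GB

def check_centos_image_path(pathname: str, size_bytes: int, fs: str) -> list:
    """Collect violations of the CentOS image-source rules listed above."""
    problems = []
    if fs.upper() not in ("FAT32", "EXT2", "EXT3", "EXT4"):
        problems.append("unsupported file system")
    if fs.upper() == "FAT32" and size_bytes > FAT32_MAX_FILE_SIZE:
        problems.append("image exceeds the FAT32 4 GB file size limit")
    if " " in pathname:
        problems.append("pathname contains spaces")
    # CJK Unified Ideographs block covers common Chinese characters.
    if any("\u4e00" <= ch <= "\u9fff" for ch in pathname):
        problems.append("pathname contains Chinese characters")
    return problems

print(check_centos_image_path("/iso/CentOS 7.iso", 5 * 1024 ** 3, "FAT32"))
```

An empty list means the pathname, size, and file system satisfy all the listed conditions.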
Configuration file validation
To ensure a successful configuration file import into a server, make sure the following conditions are met:
•	For a server installed with multiple storage controllers, make sure the following conditions are met:
−	The number of installed storage controllers is equal to or greater than the number of storage controllers specified in the configuration file.
−	The type, mode, and slot specified for each storage controller in the configuration file match the type, mode, and slot of each storage controller installed in the server.
•	If you fail to import the configuration file to a server installed with multiple storage controllers, you can use other methods to install the OS on the storage controllers.
•	For each logical drive in the file, all the member physical drives must be present in the corresponding slots of the server and must meet the following requirements:
−	If a PMC controller is used, the drives must be in Raw, Ready, or Online state.
−	If an LSI controller is used, the drives must be in Unconfigured Good, Unconfigured Bad, or Online state.
−	The physical drives of an HBA-LSI-9300-8i-A1-X controller must be in Ready state.
−	The physical drives of an HBA-LSI-9311-8i-A1-X controller must be in Ready or Optimal state.
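The controller-matching rules above can be modeled as follows. This is a hypothetical sketch of the checks, not iFIST code; the `Controller` fields and function names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Controller:
    ctype: str   # controller type, for example "LSI" or "PMC"
    mode: str    # controller mode, for example "RAID" or "HBA"
    slot: int    # slot number

def can_import(config_ctrls: list, server_ctrls: list) -> bool:
    """Illustrative model of the import checks described above: the
    server must have at least as many controllers as the file, and
    each controller in the file must match an installed controller
    by type, mode, and slot."""
    if len(server_ctrls) < len(config_ctrls):
        return False
    installed = set(server_ctrls)
    return all(c in installed for c in config_ctrls)
```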
Configuring RAID arrays
RAID array configuration involves the following tasks:
• Create RAID arrays—Create RAID arrays by using physical drives in Ready state on the server.
• Manage physical drives—Initialize and uninitialize physical drives, and set the cache mode for physical drives.
• Manage logical drives—Set the cache mode for the logical drives and delete logical drives as needed.
Creating a RAID array
Procedure
1. On the Manage logical drives tab, identify whether the logical drive on which you want to install the operating system already exists. If the logical drive already exists, click Next to go to the next step directly. If the logical drive list does not contain such a logical drive, perform the following steps to create one first.
2. On the Configure RAID arrays page, click the Create RAID array tab.
Figure 8 Create RAID array tab
3. To create a RAID array: a. Select one or more physical drives. b. Click Create.
The Create RAID Array window opens, as shown in Figure 9. For descriptions of the parameters on this window, see "Parameters."
Figure 9 Create RAID Array window (HBA controller)
c. Set the name, RAID level, stripe size, and initialization method for the RAID array.
d. Click OK.
4. To create a RAID 0 or simple volume logical drive on each physical drive in Ready or Unconfigured Good state:
a. Select the Create a RAID 0 or simple volume logical drive on each physical drive in Ready state option.
b. Click Create.
c. In the confirmation dialog box that opens, click OK.
This feature is not available if an HBA-LSI-9311-8i-A1-X controller is used.
Parameters
•	Name—Name of the RAID array.
•	RAID level—RAID level of the array.
•	Capacity—This field is automatically populated with the maximum capacity of the RAID array and cannot be modified.
•	Stripe size—Data block size written to each physical drive in the RAID array. The default is 256 KB.
•	Method—Initialization method of the RAID array.
•	Write cache—This field is supported only by RAID controllers.
•	Read cache—This field is supported only by RAID controllers.
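Because the Capacity field is derived from the selected drives and the RAID level, the standard RAID capacity arithmetic gives a sense of what to expect. This is generic RAID math shown for illustration only, not H3C-specific behavior, and it assumes identical drive sizes.

```python
def usable_capacity(raid_level: int, drive_count: int, drive_size_gb: float) -> float:
    """Approximate usable capacity for common RAID levels, assuming
    all member drives have the same size (standard RAID arithmetic,
    for illustration only)."""
    if raid_level == 0:
        return drive_count * drive_size_gb       # striping, no redundancy
    if raid_level == 1:
        return drive_size_gb                     # full mirror
    if raid_level == 5:
        return (drive_count - 1) * drive_size_gb # one drive of parity
    if raid_level == 6:
        return (drive_count - 2) * drive_size_gb # two drives of parity
    if raid_level == 10:
        return drive_count * drive_size_gb / 2   # mirrored stripes
    raise ValueError(f"unsupported RAID level: {raid_level}")
```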
Managing physical drives
About physical drive management
To use physical drives to create logical drives, make sure the physical drives are in the correct states.
Physical drive management allows you to view the status of physical drives, initialize and uninitialize physical drives, and set the cache mode for the physical drives.
Procedure
1. On the Configure RAID arrays page, click the Manage physical drives tab.
Figure 10 Manage physical drives tab (PMC controller)
2. If a PMC controller is used, select one or more physical drives and perform the following tasks as needed:
Click Set cache mode to set the cache mode for the drives. In the Set Cache Mode window shown in Figure 11, select Disabled (write through) or Enabled (write back), and click OK.
The Set cache mode option is available only when the following conditions are met:
−	A PMC controller (except a PMC HBA or UN-RAID-P460-M4 RAID controller) is used.
−	On the Configure basic settings page, the Global write cache parameter is set to Drive specific.
Figure 11 Set Cache Mode window
Click Uninitialize to uninitialize the drives.
Click Initialize to initialize the drives.
3. If an LSI controller is used, perform the following tasks as needed:
Select drives and then click Set JBOD State to set the state of the selected drives to JBOD.
The Set JBOD State button is available only when the LSI controller is in RAID mode and the JBOD attribute is ON.
Select drives and then click Set state to set the state of the physical drives to Unconfigured Good in the following scenarios:
−	The LSI controller is in RAID mode, the JBOD attribute is ON, and the state of the physical drives is Unconfigured Bad, Unconfigured Bad-F, or JBOD.
−	The LSI controller is in RAID mode, the JBOD attribute is OFF, and the state of the physical drives is Unconfigured Bad or Unconfigured Good-F.
Parameters
Parameters on the Manage physical drives tab:
•	Device—Screen-printed slot number or device ID of the physical drive.
•	Status—State of the physical drive.
If a PMC controller is used, the physical drive states include:
−	Online—The physical drive is already used to build a RAID array.
−	Ready—The physical drive is initialized and can be used to build RAID arrays.
−	Raw—The physical drive is a raw drive and must be initialized before it can be used to build a RAID array.
−	Failed—The physical drive is faulty.
If an LSI controller is used, the physical drive states include:
−	Online—The physical drive is already used to build a RAID array.
−	Offline—The physical drive is offline.
−	Unconfigured Good—The physical drive can be used to build a RAID array.
−	Unconfigured Bad—The physical drive is faulty.
−	Ready—The physical drive can be used to build RAID arrays.
−	Optimal—The physical drive is already used to build a RAID array.
−	Failed—The physical drive is faulty.
−	JBOD—The physical drive is a pass-through drive and can be used in the operating system even if no RAID array is built.
•	Operation result—Result of the most recent operation performed on the physical drive.
•	Set cache mode—Allows you to set the cache mode for the selected drives.
•	Initialize—Allows you to initialize the selected drives.
A physical drive in Raw state must be initialized before it can be used to build a RAID array. Initializing a physical drive erases all data on the drive and sets apart a small section of space on the drive for storing RAID data.
• Uninitialize—Allows you to uninitialize the selected drives. Uninitializing a physical drive erases all data including metadata on the drive, removes the reserved space section and the system partition, and restores the drive to Raw state.
Parameters on the Set Cache Mode window:
•	Cache mode—Select a cache mode. Options are:
−	Disabled (write through)—New data is written to the cache and the physical drive at the same time. Writes experience higher latency because the data must be written to two places.
−	Enabled (write back)—New data is written only to the cache. The data is written to the physical drive only when it must be replaced and removed from the cache. This mode offers low latency but entails data loss risks, because a power failure might prevent the data from being written to the physical drive.
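The difference between the two cache modes can be illustrated with a toy model. This is not iFIST or controller firmware code; it only demonstrates why write back risks data loss on power failure while write through does not.

```python
class CacheSim:
    """Toy model of the two cache modes described above.

    Write through: data reaches both the cache and the drive
    immediately. Write back: data sits in the cache and reaches the
    drive only on flush, so a power failure can lose it.
    """
    def __init__(self, write_back: bool):
        self.write_back = write_back
        self.cache = {}
        self.drive = {}

    def write(self, key, value):
        self.cache[key] = value
        if not self.write_back:
            self.drive[key] = value  # write through: drive updated now

    def flush(self):
        self.drive.update(self.cache)  # write back: drive updated on flush

    def power_fail(self):
        self.cache.clear()  # cache contents are lost on power failure
```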
Managing logical drives
1. On the Configure RAID arrays page, click the Manage logical drives tab.
Figure 12 Manage logical drives tab
2. To set the cache mode for a logical drive: a. Select the logical drive. b. Click Set cache mode. c. In the Set Cache Mode window that opens, select a cache mode and click OK.
Figure 13 Set Cache Mode window for an LSI RAID controller
Figure 14 Set Cache Mode window for a PMC RAID controller
3. Click Next.
Parameters
•	Name—Name of the logical drive.
•	RAID level—RAID level of the logical drive.
•	Status—State of the logical drive.
•	Capacity—Capacity of the logical drive.
•	Cache mode—Cache mode of the logical drive.
•	Member drives—Physical drives used to create the logical drive.
•	Set cache mode—Allows you to set the read and write policies for the selected logical drives. To ensure data integrity, a supercapacitor must be available to power the cache module in case of power failure. This option is available only when the RAID controller supports power fail safeguard. The supported read and write policies are:
−	Read ahead always/Enabled (read cache)—Always use the read-ahead policy. When retrieving data from the logical drive, the system also retrieves subsequent data and saves it to the cache, so that the subsequent data can be served directly from the cache when requested. The read-ahead policy reduces hard drive seek time and improves data retrieval efficiency. To use this policy, make sure the RAID controller supports power fail safeguard. This policy entails data security risks because data loss might occur if supercapacitor exceptions occur.
−	No read ahead/Disabled (read cache)—Use the no-read-ahead policy. The system starts to retrieve data from the logical drive only when the RAID controller receives the data read request.
−	Write back/Enabled (write back when protected by battery/ZMM)—Use the write-back policy. If the RAID controller has a functional BBU present, data is written first to the controller cache and then to the drive. If the RAID controller does not have a functional BBU present, the controller reverts to write through and writes data directly to the drive.
−	Always write back/Enabled (write back)—Use the always-write-back policy. The controller sends a write-request completion signal as soon as the data is in the controller cache, before the data is written to the drive. This policy improves write efficiency but requires that the RAID controller support power fail safeguard. This policy entails data security risks because data loss might occur if supercapacitor exceptions occur.
−	Write through/Disabled (write through)—Use the write-through policy. The controller writes data directly to the drive without first writing it to the cache, and sends a write-request completion signal only after the data is written to the drive. This policy does not require power fail safeguard support and does not entail data loss risks in the event of supercapacitor exceptions, but write efficiency is relatively low.
• Delete—Allows you to delete the selected logical drives.
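The effect of the read-ahead policy can be illustrated with a toy cache model. This sketch is for illustration only; the class design and the read-ahead window size are invented, not taken from any RAID controller.

```python
class ReadAheadCache:
    """Toy model of the read-ahead policy described above: a read of
    block n also pulls the next few blocks into the cache, so later
    sequential reads avoid additional drive accesses."""
    def __init__(self, drive_blocks, read_ahead: bool, window: int = 2):
        self.drive = drive_blocks
        self.read_ahead = read_ahead
        self.window = window       # how many extra blocks to prefetch
        self.cache = {}
        self.drive_reads = 0       # counts physical drive accesses

    def read(self, n):
        if n in self.cache:
            return self.cache[n]   # served from cache, no drive seek
        self.drive_reads += 1
        end = n + self.window + 1 if self.read_ahead else n + 1
        for i in range(n, min(end, len(self.drive))):
            self.cache[i] = self.drive[i]
        return self.cache[n]
```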
Configuring system settings
The configuration parameters in this task vary by the operating system to be installed.
Procedure
1. On the Configure system settings page, configure the operating system-specific parameters as follows:
For a Linux operating system, specify the hostname (optional), root password, username, user password, language, and network settings, as shown in Figure 15.
Figure 15 Setting the parameters for installing a Linux operating system
For a Microsoft Windows operating system, select the drivers to install, set the image file, hostname (optional), password (optional), key (optional), and primary partition capacity, as shown in Figure 16. The available drivers include all iFIST built-in drivers listed in Table 2.
Figure 16 Setting the parameters for installing a Windows operating system
For a VMware ESXi operating system, set the hostname (optional), root password, and network settings, as shown in Figure 17.
Figure 17 Setting the parameters for installing a VMware ESXi operating system
2. In the Target drive field, select the drive where you want to install the operating system. This parameter is required for all operating systems.
3. Click Next.
Parameters
•	Image type—Type of the OS image mounted to the server. Only Microsoft Windows and Linux are supported.
•	Driver—List of drivers and FIST SMS that can be selected for installation.
With a REPO file mounted to the server for OS installation, iFIST displays the drivers in the file that can be installed together with the OS if the following conditions are met:
−	The REPO file matches the server cards.
−	The OS in the file is not a VMware ESXi system.
•	OS type—Operating system type of the mounted image. Supported operating system types include:
−	Red Hat Enterprise Linux.
−	SuSE Linux Enterprise Server.
−	CentOS.
−	VMware ESXi.
−	Ubuntu Server.
−	CAS.
−	Oracle Linux.
−	Microsoft Windows Server.
• Image file—Image file of the operating system to be installed. • Hostname—Specify the hostname of the server. If you do not specify a hostname for a
Windows operating system, an automatically assigned hostname is used. If you do not specify a hostname for a Linux operating system, localhost is used. When IPv4 settings are configured as DHCP for the VMware ESXi system, the hostname cannot be configured.
•	Password—If a Windows operating system is to be installed, enter the password used to log in to the operating system. If a Linux operating system is to be installed, enter the Linux root password.
• User name—This parameter is available only when a Linux operating system is to be installed. Enter the user name used to log in to the operating system. The VMware ESXi and CAS operating systems do not support configuring the user name.
• User password—This parameter is available only when a Linux operating system is to be installed. Enter the user login password. The VMware ESXi and CAS operating systems do not support configuring the user password.
• Language—Select the language used in the operating system. This parameter is available only when a Linux operating system is to be installed. The VMware ESXi and CAS operating systems use English by default, which cannot be modified.
• Platform language—This parameter is available only when a CAS system is to be installed. Select the language used in the CAS platform. Options are Simplified Chinese and English.
•	Network settings—This area is available only when a Linux operating system is to be installed. Select an IP address obtaining method in the IPv4 settings and IPv6 settings subareas separately. Options are:
DHCP—Obtains an IPv4 or IPv6 address through DHCP.
Static—Uses a manually configured IPv4 or IPv6 address. If this method is used, you must manually configure the following parameters:
−	IPv4 address, subnet mask, and optionally the default gateway address in the IPv4 settings area.
−	IPv6 address, subnet prefix length, and the default gateway address in the IPv6 settings area.
The following guidelines apply when you configure the network settings:
For the CAS operating system:
−	In the IPv4 settings area, the DHCP option is not available.
−	The default gateway address and the IPv4 address specified in the IPv4 settings area must reside on the same network segment.
−	The IPv6 settings area is not available.
For the VMware ESXi operating system:
−	The default gateway address and the IPv4 address specified in the IPv4 settings area must reside on the same network segment.
−	The Static option in the IPv6 settings area is not available. The IPv6 address obtaining method can only be DHCP.
For the Ubuntu Server operating system:
−	The default gateway address and the IPv4 address specified in the IPv4 settings area must reside on the same network segment.
−	If you select Static in the IPv4 settings area, the IPv6 settings area becomes unavailable. If you select Static in the IPv6 settings area, the IPv4 settings area becomes unavailable.
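The same-network-segment requirement for a static IPv4 address and its default gateway can be verified with a generic check like the following. This is illustrative only; iFIST performs this validation in its GUI, and the function name is invented.

```python
import ipaddress

def gateway_on_same_segment(ip: str, mask: str, gateway: str) -> bool:
    """Check that a static IPv4 address and its default gateway fall
    on the same network segment, as the guidelines above require
    (generic check, for illustration)."""
    network = ipaddress.IPv4Interface(f"{ip}/{mask}").network
    return ipaddress.IPv4Address(gateway) in network
```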
• Key—This parameter is available only when a Windows operating system is to be installed. Enter the key required for operating system installation.
• Target controller—Select the storage controller where the operating system is to be installed. This parameter is available only when the server has multiple storage controllers installed.
• Target drive—Select the drive where the operating system is to be installed. • Primary partition capacity—This parameter is available only when a Windows operating
system is to be installed. Specify the capacity of the primary partition. A minimum capacity of 50 GB is required for operating system installation. When the server has a large physical memory, set the primary partition capacity to the maximum value as a best practice. If a Linux operating system is to be installed, the maximum capacity of the target drive is used as the primary partition capacity by default and cannot be changed. The minimum primary partition capacity required for Linux operating system installation is 80 GB.
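The minimum primary partition capacities stated above (50 GB for Windows, 80 GB for Linux) can be expressed as a simple check. The function names are invented for illustration; iFIST enforces these limits in its GUI.

```python
def min_primary_partition_gb(os_type: str) -> int:
    """Minimum primary partition capacity required for OS installation:
    50 GB for Windows, 80 GB for Linux (per the guidelines above)."""
    return 50 if os_type == "Windows" else 80

def partition_size_ok(os_type: str, size_gb: int) -> bool:
    """Return True if the chosen primary partition size is large enough."""
    return size_gb >= min_primary_partition_gb(os_type)
```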
Verifying the configuration
Procedure
1. On the Verify configuration page, verify that the operating system installation settings are correct.
Figure 18 Verifying the configuration
2. To revise the settings, click Previous. If no revision is required, click Next.
3. To export the RAID and operating system installation settings to a file:
a. Click Export configuration.
b. Select the storage device where you want to store the exported file and set the exported file format (xml or img).
The exported file carries an MD5 checksum to detect file tampering.
c. Click OK.
You can import the configuration file into another server on the Configure basic settings page.
For a server configured with multiple storage controllers, you can export the configuration file of the server successfully if no OS is installed on the storage controllers.
The configuration file must be exported to a USB flash drive formatted with the FAT32 or NTFS file system.
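A checksum-based tamper check of the kind described above can be sketched as follows. This is a generic illustration using MD5; the exact scheme iFIST applies to exported files is not documented here, and the function names are invented.

```python
import hashlib

def md5_digest(data: bytes) -> str:
    """MD5 hex digest of raw bytes."""
    return hashlib.md5(data).hexdigest()

def verify_export(data: bytes, expected_md5: str) -> bool:
    """Detect tampering by comparing the file's current digest with
    the digest recorded at export time (generic illustration)."""
    return md5_digest(data) == expected_md5
```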
Triggering automated operating system installation
Restrictions and guidelines
Do not remove the boot media before the OS installation is complete.
After the OS installation is complete, install the related drivers as soon as possible to ensure correct operation of the operating system.
The server might automatically restart multiple times during installation of a Windows operating system.
Procedure
After you click Next on the Verify configuration page, iFIST starts to prepare the server for the OS installation and displays the real-time progress, as shown in Figure 19.
After the preparation is complete, iFIST reboots the server and installs the operating system.
After the OS installation is complete, the server restarts automatically without manual intervention.
Figure 19 Installing the operating system on the server
Server diagnostics
Server Diagnostics scans the server modules to collect statistics for module-based performance and health diagnosis. It facilitates server troubleshooting and reduces the risk of unexpected problems during server operation.
Server Diagnostics supports diagnosing various modules on a server, including the BIOS, CPU, memory, hard disks, storage controllers, logical drives, network adapters, GPUs, other PCIe devices, PSUs, fans, and temperature.
Restrictions and guidelines
If the server uses an HDM version earlier than 1.30.08, iFIST cannot detect or diagnose the BIOS, PSU, fan, or temperature modules on the server.
The Server Diagnostics feature is available only on the following server models:
•	H3C UniServer R2700 G3
•	H3C UniServer R2900 G3
•	H3C UniServer R4700 G3
•	H3C UniServer R4900 G3
•	H3C UniServer R6700 G3
•	H3C UniServer R6900 G3
•	H3C UniServer B5700 G3
•	H3C UniServer B5800 G3
•	H3C UniServer B7800 G3
Viewing server module information
Restrictions and guidelines
As a best practice, re-scan the server after a server module is changed or hot swapped.
iFIST cannot detect modules that are unidentifiable to the operating system.
Procedure
1. On the iFIST home page, click Server Diagnostics.
iFIST starts to scan the server and displays the system information and server module information on the Device Info tab, as shown in Figure 20.
Figure 20 Device Info tab
2. To view information about a specific server module, select the module from the Select modules list. By default, All modules is selected and information about all detected server modules is displayed.
iFIST displays N/A for a server module if it cannot obtain the server module information.
Parameters
•	System info—Basic server information, including the server's vendor, name, serial number, UUID, and the serial number of the system board.
•	Select modules—To view information about a specific module, select the module from the list. Supported modules and the information displayed for them are as follows:
−	BIOS—Basic information about the BIOS, including the BIOS vendor, version, release date, ROM size, and supported features.
−	HDM—HDM information, including the firmware version, serial number, CPLD version, event count, recent events, POST result, TCP port numbers for HTTP and Telnet services, shared port address information, and dedicated port address information.
−	CPU—CPU information, including the maximum number of CPUs supported. For each detected CPU, the following information is displayed: socket ID, version, core count, enabled core count, SMBIOS structure handle, current and maximum speeds, external clock speed of the processor socket, level-1 data and instruction cache capacities, level-2 and level-3 cache capacities, stepping, and vendor ID.
−	Memory—Memory information, including the maximum number of memory chips supported and the total memory size. For each memory chip, the following information is displayed: slot number, type, vendor, DIMM description, DIMM size, memory DRAM type, serial number, speed, correctable error status, and correctable error count.
−	Storage—Information about storage controllers, logical drives, and physical drives.
− Storage controller—