
Field Installation Guide

Foundation 1.2

18-Feb-2015


Notice

Copyright

Copyright 2015 Nutanix, Inc.

Nutanix, Inc.

1740 Technology Drive, Suite 150

San Jose, CA 95110

 All rights reserved. This product is protected by U.S. and international copyright and intellectual property

laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks

and names mentioned herein may be trademarks of their respective companies.

License

The provision of this software to you does not grant any licenses or other rights under any Microsoft

patents with respect to anything other than the file server implementation portion of the binaries for this

software, including no licenses or any other rights in any hardware or any devices or software that are used

to communicate with or in connection with this software.

Conventions

Convention            Description
variable_value        The action depends on a value that is unique to your environment.
ncli> command         The commands are executed in the Nutanix nCLI.
user@host$ command    The commands are executed as a non-privileged user (such as nutanix) in the system shell.
root@host# command    The commands are executed as the root user in the hypervisor host (vSphere or KVM) shell.
> command             The commands are executed in the Hyper-V host shell.
output                The information is displayed as output from a command or in a log file.

Default Cluster Credentials

Interface               Target                  Username        Password
Nutanix web console     Nutanix Controller VM   admin           admin
vSphere client          ESXi host               administrator   nutanix/4u
SSH client or console   ESXi host               root            nutanix/4u
SSH client or console   KVM host                root            nutanix/4u
SSH client              Nutanix Controller VM   nutanix         nutanix/4u
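
For example, with these defaults in place you can open an SSH session to a Controller VM or to an ESXi host as follows (the addresses shown are placeholders; substitute the IPs used in your cluster):

ssh nutanix@10.1.2.31     (Nutanix Controller VM, password nutanix/4u)
ssh root@10.1.2.11        (ESXi host, password nutanix/4u)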

Version

Last modified: February 18, 2015 (2015-02-18 15:23:03 GMT-8)


Contents

Release Notes .............................................................. 5
    Release 1.2 ............................................................ 5
1: Overview ................................................................ 6
    Imaging Nodes .......................................................... 6
        Summary: Imaging a Cluster ......................................... 6
        Summary: Imaging a Node ............................................ 7
    Supported Hypervisors .................................................. 7
2: Preparing Installation Environment ...................................... 8
3: Imaging a Cluster ...................................................... 14
    Fixing IPMI Configuration Problems .................................... 21
    Fixing Imaging Problems ............................................... 22
    Cleaning Up After Installation ........................................ 24
4: Imaging a Node ......................................................... 25
    Installing a Hypervisor ............................................... 25
        Installing ESXi ................................................... 28
        Installing Hyper-V ................................................ 29
    Installing the Controller VM .......................................... 33
5: Foundation Portal ...................................................... 36
    Accessing the Portal .................................................. 36
    Foundation Files ...................................................... 37
    Phoenix Files ......................................................... 38
6: Setting IPMI Static IP Address ......................................... 40


Release Notes

Release 1.2

This release includes the following changes and enhancements:

• Hypervisor imaging support has been expanded to include Hyper-V and KVM (as well as ESXi).

Hyper-V imaging is limited to a maximum of 20 nodes.

• When using a flat switch (no routing tables), a multi-homing option has been added that allows you

to specify a production IP configuration (addresses across subnets) without being on the production

network. This essentially allows you to use different subnets for IPMI, hypervisor, and Controller VM

which enables you to run cluster create with the intended production IPs during imaging. This is an

enhancement from previous Foundation releases where you had the limitation of imaging everything in

one subnet only and were forced to use cluster_init.html  after putting the machine in the production

rack.

• The ability to specify Controller VM IP information has been added.

• The ability to specify the amount of Controller VM RAM has been added. This is especially useful if a

user would like to use advanced features such as deduplication.

• Support has been added to create a cluster after imaging the nodes. This allows you to use Foundation

to perform the cluster configuration steps previously done through the cluster_init.html  page.

• Support has been added to image bare metal nodes with the use of the MAC address of the IPMI

interface.

• A ping test feature was added to let the user ping the specified IPs in order to check for potential

conflicts before starting an imaging session.

• Foundation can now image up to 20 nodes simultaneously. Previous releases were limited to a

maximum of eight nodes simultaneously.

• Clicking the aggregate progress bar at the top of the progress monitor page now displays the

Foundation system.log contents in the log pane.

• Online help documentation is now available from the Foundation GUI (requires Internet access).

• Foundation version 1.2 requires the download of a new Foundation VM, which is now distributed in the

form of an OVF. In addition, Phoenix version 1.2 ISOs are compatible only with Foundation 1.2 VMs or 

software packages. Phoenix version 1.1 and 1.0 ISOs are not supported with Foundation 1.2.


1: Overview

Nutanix installs the Nutanix Operating System (NOS) Controller VM and the KVM hypervisor at the factory

before shipping each node to a customer. To use a different hypervisor (ESXi or Hyper-V) on factory nodes

or to use any hypervisor on bare metal nodes, the nodes must be imaged in the field. This guide provides

step-by-step instructions on how to image nodes (install a hypervisor and then the NOS Controller VM)

after they have been physically installed at a site.

Note: Only Nutanix sales engineers, support engineers, and partners are authorized to perform

a field installation. Field installation can be used to cleanly install new nodes (blocks) in a cluster 

or to install a different hypervisor on a single node. It should not be used to upgrade the

hypervisor or switch hypervisors of nodes in an existing cluster. (You can use Foundation to

re-image nodes in an existing cluster that you no longer want by first destroying the cluster.)

Imaging Nodes

A field installation can be performed for a cluster (that is, multiple nodes that can be configured as one or

more clusters) or a single node.

Summary: Imaging a Cluster

Details of these steps are in Imaging a Cluster  on page 14.

1. Set up the installation environment as follows:

a. Connect the Ethernet ports on the nodes to a switch.

b. Download Foundation (multi-node installation tool), Phoenix (Nutanix Installer ISO), and hypervisor 

ISO image files to a workstation. When installing ESXi or Hyper-V, the customer must provide the

hypervisor ISO image file.

c. Install Oracle VM VirtualBox on the workstation.

2. Open the Foundation GUI on the workstation and configure the following:

a. Enter IPMI, hypervisor, and CVM address and credential information.

b. Select the Phoenix and hypervisor ISO image files to use.

c. Start the imaging process and monitor progress.


Summary: Imaging a Node

Details of these steps are in Imaging a Node on page 25.

1. Download the Phoenix and hypervisor ISO image files to a workstation.

2. Sign into the IPMI web console for that node, attach the hypervisor ISO image file, provide required

node information, and then restart the node.

3. Repeat step 2 for the Phoenix ISO image file.

Supported Hypervisors

This table lists the hypervisor releases that can be installed on Nutanix models through this method.

Model (Series)   ESXi (1)                 Hyper-V (2)   KVM (3)
NX-1000          ●                        ●             ●
NX-2000
NX-3000
NX-3050          ●                        ●             ●
NX-6000          ●                        ●             ●
NX-7000          ● (5.1 or later only)                  ●

(1) The supported ESXi releases are 5.0 U2 and U3, 5.1 U1 and U2, and 5.5.
(2) The supported Hyper-V release is Server 2012 R2.
(3) KVM support is transparent because it is included automatically as part of the Phoenix ISO.


2: Preparing Installation Environment

Imaging a cluster in the field requires first installing certain tools and setting the environment to run those

tools.

Video: Click here to see a video demonstration of this procedure (MP4 format). This demonstrates

the procedure for Foundation release 1.1. Some steps in release 1.1 differ from the current

procedure described here.

Installation is performed from a workstation (laptop or desktop machine) with access to the IPMI interfaces

of the nodes in the cluster. Configuring the environment for installation requires setting up network

connections, installing Oracle VM VirtualBox on the workstation, downloading ISO images, and using

VirtualBox to configure various parameters. To prepare the environment for installation, do the following:

1. Connect the first 1GbE network interface of each node (middle RJ-45 interface) to a 1GbE Ethernet

switch. The IPMI LAN interfaces of the nodes must be in failover mode (factory default setting).

Figure: Port Locations (NX-3050)

Note: You can connect to either a managed switch (routing tables) or a flat switch (no routing

tables). A flat switch is often recommended to protect against configuration errors that could

affect the production environment. Foundation includes a multi-homing feature that allows you

to image the nodes using production IP addresses despite being connected to a flat switch (see

Imaging a Cluster  on page 14).

2. Connect the installation workstation (laptop or desktop machine used for this installation) to the same

1GbE switch as the nodes.

The installation workstation requires at least 3 GB of memory (Foundation VM size plus 1 GB), 25 GB of 

disk space (preferably SSD), and a physical (wired) network adapter.

3. Go to the Foundation portal (see Foundation Portal  on page 36) and download the following files to a

temporary directory on the installation workstation.

•   Foundation_VM-version# .ovf. This is the Foundation VM OVF configuration file for the version# 

release, for example Foundation_VM-1.2.ovf .


•   Foundation_VM-version# -disk1.vmdk. This is the Foundation VM VMDK file for the version#  release,

for example Foundation_VM-1.2-disk1.vmdk .

•   VirtualBox-4.3.10-build# -[OSX|Win].[dmg|exe] . This is the Oracle VM VirtualBox installer for 

Mac OS (VirtualBox-4.3.10-build# -OSX.dmg) or Windows (VirtualBox-4.3.10-build# -Win.exe).

Oracle VM VirtualBox is a free open source tool used to create a virtualized environment on the

workstation.

4. Open the Oracle VM VirtualBox installer and install Oracle VM VirtualBox using the default options.

See the Oracle VM VirtualBox User Manual  for installation and start up instructions (https:// 

www.virtualbox.org/wiki/Documentation).

5. Create a new folder called VirtualBox VMs in your home directory.

On a Windows system, this is typically C:\Users\user_name\VirtualBox VMs.

6. Copy the Foundation_VM-version# .ovf and Foundation_VM-version# -disk1.vmdk files to the VirtualBox

VMs folder that you created in step 5.

7. Start Oracle VM VirtualBox.

Figure: VirtualBox Welcome Screen

8. Click the File option of the main menu and then select Import Appliance from the pull-down list.

9. Find and select the Foundation_VM-version# .ovf file, and then click Next.

10. Click the Import button.

11. In the left column of the main screen, select Foundation_VM-version#  and click Start.

The Foundation VM console launches and the VM operating system boots.
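
If you prefer a command line over steps 8 through 11, the VBoxManage tool that ships with Oracle VM VirtualBox can perform the same import and start. This is only a sketch; it assumes the release 1.2 file name and that the imported VM keeps the name defined in the OVF:

VBoxManage import Foundation_VM-1.2.ovf
VBoxManage startvm "Foundation_VM-1.2"

Run the commands from the VirtualBox VMs folder created in step 5.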

12. At the login screen, log in as the nutanix user with the password nutanix/4u.

The Foundation VM desktop appears (after it loads).

13. If you want to enable file drag-and-drop functionality between your workstation and the Foundation VM,

install the VirtualBox Guest Additions as follows:

a. On the VirtualBox window for the Foundation VM, select Devices > Insert Guest Additions CD

Image... from the menu.

 A VBOXADDITIONS CD entry appears on the Foundation VM desktop.


b. Click OK when prompted to Open Autorun Prompt and then click Run.

c. Enter the root password (nutanix/4u) and then click Authenticate.

d.  After the installation is complete, press the return key to close the VirtualBox Guest Additions

installation window.

e. Right-click the VBOXADDITIONS CD entry on the desktop and select Eject.

f. Reboot the Foundation VM by selecting System > Shutdown... > Restart from the Linux GUI.

Note:  A reboot is necessary for the changes to take effect.

g.  After the Foundation VM reboots, select Devices > Drag 'n' Drop > Bidirectional from the menu on

the VirtualBox window for the Foundation VM.

14. Open a terminal session and run the ifconfig command to determine if the Foundation VM was able to

get an IP address from the DHCP server.

If the Foundation VM has a valid IP address, skip to the next step. Otherwise, configure a static IP as

follows (a quick way to re-check the address appears after step 14f):

Note: Normally, the Foundation VM needs to be on a public network in order to copy selected

ISO files to the Foundation VM in the next two steps. This might require setting a static IP

address now and setting it again when the workstation is on a different (typically private)

network for the installation (see Imaging a Cluster  on page 14).

a. Double click the set_foundation_ip_address  icon on the Foundation VM desktop.

Figure: Foundation VM: Desktop

b. In the pop-up window, click the Run in Terminal button.


Figure: Foundation VM: Terminal Window 

c. In the Select Action box in the terminal window, select Device Configuration.

Note: Selections in the terminal window can be made using the indicated keys only. (Mouse

clicks do not work.)

Figure: Foundation VM: Action Box 

d. In the Select a Device box, select eth0.

Figure: Foundation VM: Device Configuration Box 

e. In the Network Configuration box, remove the asterisk in the Use DHCP field (which is set by

default), enter appropriate addresses in the Static IP, Netmask, and Default gateway IP fields, and

then click the OK button.


Figure: Foundation VM: Network Configuration Box 

f. Click the Save button in the Select a Device box and the Save & Quit button in the Select Action

box.

This saves the configuration and closes the terminal window.
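
To re-check the address after saving (whether it was assigned by DHCP or set statically above), run ifconfig again from a Foundation VM terminal; eth0 is the device selected in step 14d:

user@host$ ifconfig eth0

The address reported for eth0 should be the one you intend to use on the installation network.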

15. Copy the desired Phoenix ISO image file from the Foundation portal to the /home/nutanix/foundation/

isos/phoenix folder.

Phoenix is the name of another installation tool used in this process. There is a Phoenix ISO image file for each supported NOS release. See the Phoenix Releases section in Foundation Portal on

page 36 for a list of the available Phoenix ISO images.

Caution: Phoenix release 1.2 is the earliest supported release; do not use a Phoenix ISO

image from an earlier release.

16. Download the desired hypervisor ISO image (ESXi or Hyper-V) to the /home/nutanix/foundation/isos/

hypervisor folder.

Customers must provide the ESXi or Hyper-V ISO image; it is not provided by Nutanix. KVM is included

in the Phoenix ISO, so a separate KVM ISO is not required. Check with your VMware or Microsoft

representative, or download an ISO image from a VMware or Microsoft support site:

• VMware: http://www.vmware.com/support.html
• Microsoft (Hyper-V free): http://technet.microsoft.com/en-us/evalcenter/dn205299.aspx

• MSDN (subscription): http://msdn.microsoft.com/subscriptions/downloads/#FileId=57052 

The following table lists the supported hypervisor images.

Hypervisor ISO Images

ESXi 5.0 U2
  File name: VMware-VMvisor-Installer-5.0.0.update02-914586.x86_64.iso
  MD5 sum: fa6a00a3f0dd0cd1a677f69a236611e2

ESXi 5.0 U3
  File name: VMware-VMvisor-Installer-5.0.0.update03-1311175.x86_64.iso
  MD5 sum: 391496b995db6d0cf27f0cf79927eca6

ESXi 5.1 U1
  File name: VMware-VMvisor-Installer-5.1.0.update01-1065491.x86_64.iso
  MD5 sum: 2cd15e433aaacc7638c706e013dd673a

ESXi 5.1 U2
  File name: VMware-VMvisor-Installer-5.1.0.update02-1483097.x86_64.iso
  MD5 sum: 6730d6085466c513c04e74a2c2e59dc8

ESXi 5.5
  File name: VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso
  MD5 sum: 9aaa9e0daa424a7021c7dc13db7b9409

Windows Server 2012 R2 (datacenter)
  File name: en_windows_server_2012_r2_vl_x64_dvd_3319595.iso
  MD5 sum: fb101ed6d7328aca6473158006630a9d
  SHA1: A73FC07C1B9F560F960F1C4A5857FAC062041235

Windows Server 2012 R2 (datacenter)
  File name: SW_DVD9_Windows_Svr_Std_and_DataCtr_2012_R2_64Bit_English_-3_MLF_X19-53588.ISO
  MD5 sum: b52450dd5ba8007e2934f5c6e6eda0ce

Windows Server 2012 R2 (free)
  File name: 9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO
  MD5 sum: 9c9e0d82cb6301a4b88fd2f4c35caf80
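
To confirm that a downloaded image is intact, you can compare its checksum against this table. For example, on the Foundation VM (assuming the ESXi 5.5 file name above and the download location from step 16):

user@host$ md5sum /home/nutanix/foundation/isos/hypervisor/VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso

The reported sum should match 9aaa9e0daa424a7021c7dc13db7b9409 exactly; if it does not, download the image again.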


3: Imaging a Cluster

This procedure describes how to install a selected hypervisor and the NOS Controller VM on all the new

nodes in a cluster from an ISO image on a workstation.

Before you begin:

• Physically install the Nutanix cluster at your site. See the Physical Installation Guide for your model type

for installation instructions.

• Set up the installation environment (see Preparing Installation Environment  on page 8).

Note: If you changed the boot device order in the BIOS to boot from a USB flash drive, you will

get a Foundation timeout error if you do not change the boot order back to virtual CD-ROM in

the BIOS.

Note: If STP (spanning tree protocol) is enabled, it can cause Foundation to timeout during the

imaging process. Therefore, disable STP before starting Foundation.

• Have ready the appropriate naming, IP address, and netmask information needed for installation. You

can use the following table to record the information prior to installation.

Note: The Foundation IP address set previously assumed a public network in order to

download the appropriate files. If you are imaging the cluster on a different (typically private)

network in which the current address is no longer correct, repeat step 15 in Preparing 

Installation Environment  on page 8 to configure a new static IP address for the Foundation VM.

Installation Parameter Values

Global Parameters
    IPMI netmask
    IPMI gateway (IP address)
    IPMI username (default is ADMIN)
    IPMI password (default is ADMIN)
    Hypervisor netmask
    Hypervisor gateway
    Hypervisor name server (DNS server IP address)
    CVM (Controller VM) netmask
    CVM gateway
    CVM memory (16 GB by default)

Foundation VM Parameters
    IPMI IP address
    Hypervisor IP address
    CVM IP address

Node-Specific Parameters
    Starting IP address for IPMI address range
    Starting IP address for hypervisor address range
    Starting IP address for CVM address range

To install the hypervisor and Controller VM on the cluster nodes, do the following:

Video: Click here to see a video demonstration of this procedure (MP4 format).

1. Click the Nutanix Foundation icon on the Foundation VM desktop to start the Foundation GUI.

Note: See Preparing Installation Environment  on page 8 if Oracle VM VirtualBox is not started

or the Foundation VM is not running currently. You can also start the Foundation GUI by

opening a web browser and entering http://localhost:8000/gui/index.html .

Figure: Foundation VM Desktop

The Foundation screen appears. The screen contains three sections: global parameters at the top,

node information in the middle, and ISO image information at the bottom. Upon opening the Foundation

screen, Foundation begins searching the network for unconfigured Nutanix nodes and displays

information in the middle section about the discovered nodes. The discovery process can take several

minutes (or longer) if there are many nodes on the network. Wait for the discovery process to complete

before proceeding.

Note: Foundation discovers unconfigured nodes only. If you are running Foundation on a

preconfigured block with an existing cluster and you want Foundation to image those nodes,

you must first destroy the existing cluster in order for Foundation to discover those nodes.

Note: To display the help documentation in a separate browser tab or window, select Help

from the gear icon pull-down list at the top right of the screen. (Select About to display

the Foundation version.) You need Internet access to view the help documentation. If you

cannot access the help contents, either view Foundation from your host browser if it has

Internet access or copy the help link URL to a browser on any system with Internet access.


Figure: Foundation Screen: Full Screen

2. In the top section of the screen, enter appropriate values in the indicated fields:

Note: The parameters in this section are global and will apply to all the discovered nodes.

Figure: Foundation Screen: Global Parameters

a. IPMI Netmask: Enter the IPMI netmask value.

b. IPMI Gateway: Enter an IP address for the gateway.

c. IPMI Username: Enter the IPMI user name. The default user name is ADMIN.


d. IPMI Password: Enter the IPMI password. The default password is ADMIN.

Check the show password box to display the password as you type it.

e. Hypervisor Netmask: Enter the hypervisor netmask value.

f. Hypervisor Gateway: Enter an IP address for the gateway.

g. Hypervisor Name Server : Enter the IP address of the DNS name server.

h. CVM Netmask: Enter the Controller VM netmask value.

i. CVM Gateway: Enter an IP address for the gateway.

 j. CVM Memory: Select a memory size for the Controller VM from the pull-down list.

This field is set initially to default. (The default amount varies according to the node model type.)

The other options allow you to specify a memory size of 16 GB, 24 GB, or 32 GB. The default setting

represents the recommended amount for the model type. Assigning more memory than the default

might be appropriate when using advanced features such as deduplication.

Note: Use the default memory setting unless Nutanix support recommends a different

setting.

3. In the upper middle section of the screen, configure the installation as follows:

Figure: Foundation Screen: Installation Parameters

a. If you are using a flat switch (no routing tables) for installation, check the Multi-Homing box.

The Multi-Homing line appears when the box is checked (and disappears when the box is

unchecked). The purpose of the multi-homing feature is to allow the Foundation VM to configure final

production IP addresses for IPMI, hypervisor, and Controller VM while using an unmanaged switch.

• Enter unique IP addresses for the Foundation VM to use for communicating with IPMI,

hypervisor, and Controller VM components respectively. Make sure that the IPs are on the matching IPMI, hypervisor, and Controller VM subnets configured in the top section of the screen

(step 2). An illustrative example appears after step 3e.

• If this box is not checked, Foundation requires that either all addresses are on the same subnet

or that the configured IPMI, hypervisor, and Controller VM IP addresses are routable.

b. To create a cluster after imaging the nodes, click the Create cluster  box.

Four Create Cluster  lines appear when the box is checked (and disappear when the box is

unchecked). Enter the following information in the indicated fields:


• Name: Enter a cluster name.

• External IP: Enter an external (virtual) IP address for the cluster. This field sets a logical IP

address that will always connect to one of the active Controller VMs in the cluster (assuming at

least one is active), which removes the need to enter the address of a specific Controller VM.

This parameter is required for Hyper-V clusters and is optional for ESXi and KVM clusters.

• Max Redundancy Factor : Select a redundancy factor (2 or 3) for the cluster from the pull-down

list. Setting this to 2 means the cluster can tolerate the failure of a single node or drive; setting it

to 3 means it can withstand the failure of 2 nodes or drives in different blocks. Redundancy factor 3 can be enabled only when the cluster is created, and it requires that the cluster have at least

five nodes. (In addition, containers must have replication factor 3 for guest VM data to withstand

the failure of two nodes.) Redundancy factor 3 is available only on NOS release 4.0 or later.

• CVM DNS Servers: Enter the Controller VM DNS server IP address or fully qualified domain

name. Enter a comma separated list for multiple server addresses.

• CVM NTP Servers: Enter the Controller VM NTP server IP address or fully qualified domain

name. Enter a comma separated list for multiple server addresses.

• Hypervisor NTP Servers: Enter the hypervisor NTP server IP address or fully qualified domain

name. Enter a comma separated list for multiple server addresses.

c. To image one or more bare metal nodes (that is, nodes without any NOS software), click the Add

bare metal nodes link.

An Add Bare Metal line appears when the box is checked (and disappears when the box is unchecked). Enter the following information in the indicated fields:

• How many blocks?: Enter the number of blocks to be added that contain bare metal nodes.

• How many nodes per block?: Enter the number of bare metal nodes in each block.

• Click the Add button to the right of the fields. This will add that number of blocks (and nodes per 

block) to the node listing (see next step).

d. If for any reason you want to look again for unconfigured nodes, click the Retry discovery link.

This repeats the discovery process that occurred when you opened the Foundation GUI.

Note: You can retry discovery and reset all field entries to the default state by selecting

Reset Configuration from the gear icon pull-down list at the top right of the screen.

e. To check which IPMI IP addresses are active and reachable, click the Ping Scan link.

This does a ping test to each IP address in the IPMI, hypervisor, and CVM IP columns (see next

step). A (success) or (failure) icon appears next to that field to indicate the ping test result for 

each node. This feature is most useful when imaging a previously unconfigured set of nodes. None

of the selected IPs should be pingable. Successful pings usually indicate a conflict with the existing

infrastructure.

Note: When re-imaging a configured set of nodes using the same network configuration,

failure to ping indicates a networking issue.
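
As an illustration of the multi-homing entries in step 3a (all values here are made up): suppose the production plan puts IPMI on 10.1.1.0/24 and keeps the hypervisor and Controller VMs together on 10.1.2.0/24, as the caution in step 4 recommends. You would then enter three unused addresses for the Foundation VM itself, one on each matching subnet, for example 10.1.1.5 (IPMI), 10.1.2.5 (hypervisor), and 10.1.2.6 (CVM). The Foundation VM can then reach the production IPs of all three components across the flat switch even though they are not all on one subnet.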

4. In the lower middle section of the screen, configure the nodes as follows:

This section displays information about the discovered nodes. The size of this section varies and can

be quite large when many blocks are discovered. It includes columns for the block ID, node, IPMI

Mac address, IPMI IP address, hypervisor IP address, CVM IP address, and hypervisor host name. A

section is displayed for each discovered block with lines for each node in that block. If you added bare

metal blocks in the previous step, those blocks also appear.


Figure: Foundation Screen: Node Parameters

a. If there are discovered blocks you do not want to image, uncheck the box in the Block ID column for 

those blocks.

 All discovered blocks (and bare metal blocks) are checked by default. Foundation will image all

checked blocks. You can exclude individual nodes by unchecking the box in the Node field for those

nodes. To exclude and remove a block from the display, click the red X on the far right.

b. If you configured bare metal nodes, enter the MAC address of the IPMI interface for each node in

the IPMI MAC Address field.

This field is editable for bare metal nodes only; a value of N/A appears for all other nodes. The MAC

address of the IPMI interface normally appears on a label on the back of each node. (Make sure you

enter the MAC address from the label that starts with "IPMI:", not the one that starts with "LAN:".)

The MAC address appears in the standard form of six two-digit hexadecimal numbers separated by

colons, for example 00:25:90:D9:01:98.

Caution:  Any existing data on the node will be destroyed during imaging. If you are using

the bare metal option to re-image a previously used node, do not proceed until you have

saved all the data on the node that you want to keep.

Figure: IPMI MAC Address Label 

c. Do one of the following in the IPMI IP column:

• To specify the IPMI addresses manually, go to the line for each node and enter (or update) the IP

address in that field.

• To specify the IPMI addresses automatically, enter a starting IP address in the top line of the IPMI

IP column. The entered address is assigned to the IPMI port of the first node, and consecutive

IP addresses (starting from the entered address) are assigned automatically to the remaining nodes. Discovered nodes are sorted first by block ID and then by position, so IP assignments are

sequential. If you do not want all addresses to be consecutive, you can change the IP address for

specific nodes by updating the address in the appropriate fields for those nodes. (A worked example of automatic assignment appears after step 4g.)

Note:  Automatic assignment is not used for addresses ending in 0, 1, 254, or 255,

because such addresses are commonly reserved by network administrators.

d. Repeat the previous step for the Hypervisor IP column.

This sets the hypervisor IP addresses for all the nodes.


e. Repeat the previous step for the CVM IP column.

This sets the Controller VM IP addresses for all the nodes.

Caution: The Nutanix high availability features require that both hypervisor and CVM be in

the same subnet. Putting them in different subnets reduces the failure protection provided by

Nutanix.

f. Do one of the following in the Hypervisor Hostname field:

• A host name is automatically generated for each host (NTNX-unique_identifier). If these names

are acceptable, do nothing in this field.

Caution: Windows computer names (used in Hyper-V) have a 15 character limit. The

automatically generated names are longer than 15 characters, which would result in the

same truncated name for multiple hosts in a Windows environment. Therefore, do not

use the automatically generated names when the hypervisor is Hyper-V.

• To specify the host names manually, go to the line for each node and enter the desired name in

that field.

• To specify the host names automatically, enter a base name in the top line of the Hypervisor 

Hostname column. The base name with a suffix of "-1" is assigned as the host name of the first

node, and the base name with "-2", "-3", and so on is assigned automatically as the host names of the remaining nodes. You can specify different names for selected nodes by updating the entry

in the appropriate field for those nodes.

g. If you enabled cluster create (checked the Create cluster  box), do one of the following in the

Cluster Create column:

Note: This field sets which nodes should be included in the cluster. Check boxes appear in

this field only when the Create cluster  box is checked.

• To select nodes individually, check the box for each node to be included in the cluster.

• To select all nodes, check the box at the top of the column. You can de-select a specific node by

unchecking the box for that node.
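
A short worked example of the automatic assignment described in steps 4c through 4f, using made-up values for a single four-node block: entering 10.1.1.11 in the top line of the IPMI IP column assigns 10.1.1.11 through 10.1.1.14 in block and position order; entering 10.1.2.11 and 10.1.2.31 in the Hypervisor IP and CVM IP columns assigns 10.1.2.11-14 and 10.1.2.31-34 the same way; and entering the base name poc as the first Hypervisor Hostname produces poc-1 through poc-4.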

5. In the bottom section of the screen, do the following:

Figure: Foundation Screen: ISO Image Selection

a. In the Phoenix ISO Image field, select the Phoenix ISO image you downloaded previously from the

pull-down list (see Preparing Installation Environment  on page 8).

Note: Click the Refresh link to display the current list of available images in the ~/

foundation/isos/[phoenix|hypervisor]  folder. If the desired Phoenix or hypervisor ISO

image you downloaded is not listed, it might have been downloaded to the wrong directory

(see Preparing Installation Environment  on page 8).

b. In the Hypervisor ISO Image field, select the hypervisor ISO image you downloaded previously

from the pull-down list (see Preparing Installation Environment  on page 8).


Note:  A hypervisor ISO is required to install ESXi or Hyper-V, but KVM is included in the

Phoenix ISO. To use KVM, select KVM (no ISO required) from the pull-down list.

6. When all the fields are correct, click the Run Installation button at the bottom of the screen.

The imaging process begins. Nodes are imaged in parallel, and the imaging process takes about 45

minutes.

Note: Simultaneous processing is limited to a maximum of 20 nodes. If the cluster contains

more than 20 nodes, the total processing time is about 45 minutes for each group of 20 nodes.

First, the IPMI port addresses are configured. If IPMI port addressing is successful, the nodes are

imaged. The IPMI port configuration processing can take several minutes or longer depending on the

size of the cluster. You can watch server progress by clicking on the aggregate progress bar at the top,

which displays the service.log contents in the pane on the right of the screen.

When processing moves to node imaging (and subsequent cluster creation if enabled), the GUI displays

dynamic status messages and a progress bar for each node. A blue bar indicates good progress; a red

bar indicates a problem. Processing messages appear during each stage. Click on the progress bar for 

a node to display the log file for that node (on the right). Click the Refresh link to refresh the displayed

log file contents.

When processing is complete, a green check mark appears next to the node name if IPMI configuration

and imaging (and cluster creation) was successful or a red x appears if it was not. At this point, do one

of the following:

• Status: There is a green check mark next to every node. This means IPMI configuration and imaging

(both hypervisor and NOS Controller VM) across all the nodes in the cluster was successful, and

cluster creation was successful (if enabled).

• Status: At least one node has a red check mark next to the IPMI address field. This means

the installation failed at the IPMI configuration step. To correct this problem, see Fixing IPMI 

Configuration Problems on page 21.

• Status: At least one node has a red check mark next to the hypervisor address field. This means

IPMI configuration was successful across the cluster but imaging failed. The default per-node

installation timeout is 30 minutes, so you can expect all the nodes (in each run of up to 20 nodes) to

finish successfully or encounter a problem in that amount of time. To correct this problem, see Fixing 

Imaging Problems on page 22.
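
If you prefer to follow the Foundation logs from a terminal while step 6 runs instead of using the GUI panes, the same information is written under /home/nutanix/foundation/log on the Foundation VM (exact file names can vary by release); for example:

user@host$ tail -f /home/nutanix/foundation/log/service.log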

Fixing IPMI Configuration Problems

When the IPMI port configuration fails for one or more nodes in the cluster, the installation process

stops before imaging any of the nodes. (Foundation will not go to the imaging step after an IPMI port

configuration failure, but it will try to configure the IPMI port address on all nodes before stopping.) The

installation screen reappears with a red check next to the IPMI port address field for any node that was not

configured successfully. To correct this problem, do the following:

1. Review the displayed addresses for the failed nodes, determine if that address is valid, and change the

IP address in that field if necessary.

Hovering the cursor over the address displays a pop-up message (see figure below) with

troubleshooting information. This can help you diagnose and correct the problem. In addition, check that

the IPMI credentials are correct. See the service.log file (in /home/nutanix/foundation/log ) for more

detailed information.


2. When you have corrected all the problems and are ready to try again, click the Configure IPMI button

at the bottom of the screen.

3. Repeat the preceding steps as necessary to fix all the IPMI configuration errors.

4. When all nodes have green check marks in the IPMI address column, click the Image Nodes button at

the bottom of the screen to begin the imaging step.

If you cannot fix the IPMI configuration problem for one or more of the nodes, you can bypass those nodes and continue to the imaging step for the other nodes by clicking the Proceed button. In this case

you must configure the IPMI port address manually for each bypassed node (see Setting IPMI Static IP 

 Address on page 40).

Figure: Foundation: IPMI Configuration Error 

Fixing Imaging Problems

When imaging fails for one or more nodes in the cluster, the installation screen reappears with a red check

next to the hypervisor address field for any node that was not imaged successfully. To correct this problem, do the following:

1. Review the displayed addresses for the failed nodes, determine if that address is valid, and change the

IP address in that field if necessary.

Hovering the cursor over the address displays a pop-up message with troubleshooting information. This

can help you diagnose and correct the problem.

2. When you have corrected the problems and are ready to try again, click the Proceed button at the

bottom of the screen.

The GUI displays dynamic status messages and a progress bar for each node during imaging (see

Imaging a Cluster on page 14).

3. Repeat the preceding steps as necessary to fix all the imaging errors.

If you cannot fix the imaging problem for one or more of the nodes, you can image those nodes one at a

time (see Imaging a Node on page 25).

In the following example, a node failed to image successfully because it exceeded the

installation timeout period. (This was because the IPMI port cable was disconnected during


installation.) The progress bar turned red and a message about the problem was written to

the log.

Figure: Foundation Screen: Imaging Problem (progress screen)

Clicking the Back to Configuration link at the top redisplays the original Foundation

screen, updated to show that 192.168.20.102 failed to image successfully. After fixing the problem, click the Image Nodes button to image that node again. (You can also retry

imaging by clicking the Retry Imaging Failed Nodes link at the top of the status bar 

page.)

Figure: Foundation Screen: Imaging Problem (configuration screen)

The imaging process starts again for the failed node(s).


Figure: Foundation Screen: Imaging Problem (retry screen)

Cleaning Up After Installation

Some information persists after imaging a cluster using Foundation. If you want to use the same

Foundation VM to image another cluster, the persistent information must be removed before attempting

another installation.

To remove the persistent information after an installation, click the Reset Configuration button in the

upper right of the screen.

Clicking this button reinitializes the progress monitor, destroys the persisted configuration data, and

returns the Foundation environment to a fresh state.

Figure: Foundation Screen: Reset Configuration


4: Imaging a Node

This procedure describes how to install the NOS Controller VM and selected hypervisor on a new or 

replacement node from an ISO image on a workstation (laptop or desktop machine).

Before you begin: If you are adding a new node, physically install that node at your site. See the Physical 

Installation Guide for your model type for installation instructions.

Imaging a new or replacement node can be done either through the IPMI interface (network connection

required) or through a direct attached USB (no network connection required). In either case the installation

is divided into two steps:

1. Install the desired hypervisor version (see Installing a Hypervisor  on page 25).

2. Install the NOS Controller VM and provision the hypervisor (see Installing the Controller VM on

page 33).

Installing a Hypervisor

This procedure describes how to install a hypervisor on a single node in a cluster in the field.

Note: This procedure is for ESXi or Hyper-V only. It is not needed for KVM, because KVM is

included in the Phoenix ISO (see Installing the Controller VM on page 33).

To install a hypervisor on a new or replacement node in the field, do the following:

1. Connect the IPMI port on that node to the network.

 A 1 or 10 GbE port connection is not required for imaging the node.

2.  Assign an IP address (static or DHCP) to the IPMI interface on the node.

To assign a static address, see Setting IPMI Static IP Address on page 40.
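
Before opening the IPMI web console in step 4, it can save time to confirm that the IPMI address answers from the workstation; a quick check with a placeholder address:

ping 10.1.1.21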

3. Download the desired hypervisor ISO image (ESXi or Hyper-V) to a temporary folder on a workstation.

Customers must provide the ESXi or Hyper-V ISO image; it is not provided by Nutanix. Check with your 

VMware or Microsoft representative, or download an ISO image from a VMware or Microsoft support

site:

• VMware: http://www.vmware.com/support.html
• Microsoft (Hyper-V free): http://technet.microsoft.com/en-us/evalcenter/dn205299.aspx

• MSDN (subscription): http://msdn.microsoft.com/subscriptions/downloads/#FileId=57052 

The following table lists the supported hypervisor images.


Hypervisor ISO Images

ESXi 5.0 U2
  File name: VMware-VMvisor-Installer-5.0.0.update02-914586.x86_64.iso
  MD5 sum: fa6a00a3f0dd0cd1a677f69a236611e2

ESXi 5.0 U3
  File name: VMware-VMvisor-Installer-5.0.0.update03-1311175.x86_64.iso
  MD5 sum: 391496b995db6d0cf27f0cf79927eca6

ESXi 5.1 U1
  File name: VMware-VMvisor-Installer-5.1.0.update01-1065491.x86_64.iso
  MD5 sum: 2cd15e433aaacc7638c706e013dd673a

ESXi 5.1 U2
  File name: VMware-VMvisor-Installer-5.1.0.update02-1483097.x86_64.iso
  MD5 sum: 6730d6085466c513c04e74a2c2e59dc8

ESXi 5.5
  File name: VMware-VMvisor-Installer-5.5.0-1331820.x86_64.iso
  MD5 sum: 9aaa9e0daa424a7021c7dc13db7b9409

Windows Server 2012 R2 (datacenter)
  File name: en_windows_server_2012_r2_vl_x64_dvd_3319595.iso
  MD5 sum: fb101ed6d7328aca6473158006630a9d
  SHA1: A73FC07C1B9F560F960F1C4A5857FAC062041235

Windows Server 2012 R2 (datacenter)
  File name: SW_DVD9_Windows_Svr_Std_and_DataCtr_2012_R2_64Bit_English_-3_MLF_X19-53588.ISO
  MD5 sum: b52450dd5ba8007e2934f5c6e6eda0ce

Windows Server 2012 R2 (free)
  File name: 9600.16384.WINBLUE_RTM.130821-1623_X64FRE_SERVERHYPERCORE_EN-US-IRM_SHV_X64FRE_EN-US_DV5.ISO
  MD5 sum: 9c9e0d82cb6301a4b88fd2f4c35caf80

4. Open a Web browser to the IPMI IP address of the node to be imaged.

5. Enter the IPMI login credentials in the login screen.

The default value for both user name and password is ADMIN (upper case).

Figure: IPMI Console Login Screen

The IPMI console main screen appears.


Figure: IPMI Console Screen

6. Select Console Redirection from the Remote Console drop-down list of the main menu, and then

click the Launch Console button.

Figure: IPMI Console Menu

7. Select Virtual Storage from the Virtual Media drop-down list of the remote console main menu.

Figure: IPMI Remote Console Menu (Virtual Media)

8. Click the CDROM&ISO tab in the Virtual Storage display and then select ISO File from the Logical

Drive Type field drop-down list.

Figure: IPMI Virtual Storage Screen

9. In the browse window, go to where the hypervisor ISO image was downloaded, select that file, and then

click the Open button.

10. In the remote console main menu, select Set Power Reset in the Power Control drop-down list.

This causes the system to reboot using the selected hypervisor image.


Figure: IPMI Remote Console Menu (Power Control)

What to do next: Complete installation by following the steps for the hypervisor:

• Installing ESXi  on page 28

• Installing Hyper-V  on page 29

Installing ESXi

Before you begin: Complete Installing a Hypervisor  on page 25.

1. Click Continue at the installation screen and then accept the end user license agreement on the next

screen.

Figure: ESXi Installation Screen

2. In the Select a Disk page, select the SATADOM as the storage device, click Continue, and then click

OK in the confirmation window.

Figure: ESXi Device Selection Screen

3. In the keyboard layout screen, select a layout (such as US Default) and then click Continue.

4. In the root password screen, enter nutanix/4u as the root password.

Note: The root password must be nutanix/4u or the installation will fail.

5. Review the information on the Install Confirm screen and then click Install.


Figure: ESXi Installation Confirmation Screen

The installation begins and a dynamic progress bar appears.

6. When the Installation Complete screen appears, go back to the Virtual Storage screen (see step 9),

click the Plug Out button, and then return to the Installation Complete screen and click Reboot.

What to do next: After the system reboots, you can install the NOS Controller VM and provision the

hypervisor (see Installing the Controller VM on page 33).

Installing Hyper-V

Before you begin: Complete Installing a Hypervisor  on page 25.

1. Press any key when the Press any key to boot from CD or DVD prompt appears.

2. Select Windows Setup [EMS Enabled] in the Windows Boot Manager screen.

Figure: Windows Boot Manager Screen

3. In the language selection screen, simply click the Next button.

Figure: Hyper-V Language Screen

4. In the installation screen, select the Repair your computer  option.


Note: Do not click the Install now button. It will be used later in the procedure.

Figure: Hyper-V Installation Screen

5. In the choose an option screen, select Troubleshoot.

Figure: Hyper-V Choose Option Screen

6. In the advanced options screen, select Command Prompt.

Figure: Hyper-V Advanced Options Screen

7. Partition and format the DOM.

a. Start the disk partitioning utility.

diskpart


b. Find the disk in the displayed list that is about 60 GB (only one disk will be that size). Select that disk

and then run the clean command:

select disk number
clean

c. Create and format a primary partition (size 1024 and file system fat32).

create partition primary size=1024
select partition 1
format fs=fat32 quick

d. Create and format a second primary partition (default size and file system ntfs).

create partition primary
select partition 2
format fs=ntfs quick

e.  Assign the drive letter "C" to the DOM install partition volume.

list volume

This displays a table of logical volumes and their associated drive letter, size, and file system type.

Locate the volume with an NTFS file system and size of approximately 50 GB. If this volume (which

is the DOM install partition) is drive letter "C", go to the next step.

Otherwise, do one of the following:

• If drive letter "C" is assigned currently to another volume, enter the following commands to

remove the current "C" drive volume and reassign "C" to the DOM install partition volume:

select volume cdrive_volume_id#
remove
select volume dom_install_volume_id#
assign letter=c

• If drive letter "C" is not assigned currently, enter the following commands to assign "C" to the

DOM install partition volume:

select volume dom_install_volume_id#
assign letter=c

f. Exit the diskpart utility.

exit
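
Taken together, the commands in steps 7a through 7f might look like the following on a typical node. The disk and volume numbers are placeholders, so substitute the numbers reported by list disk and list volume, and this sketch assumes drive letter "C" is not already assigned to another volume (otherwise use the remove and reassign commands from step 7e):

diskpart
list disk
select disk 1
clean
create partition primary size=1024
select partition 1
format fs=fat32 quick
create partition primary
select partition 2
format fs=ntfs quick
list volume
select volume 3
assign letter=c
exit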

8. Start the server setup utility.

> setup.exe

9. The language selection screen reappears. Again, just click the Next button.

10. The install screen reappears. This time click the Install now button.

11. In the operating system screen, select Windows Server 2012 Datacenter (Server Core Installation) and then click the Next button.


Figure: Hyper-V Progress Screen

15.  After the installation is complete, manually boot the host.

16. After Windows boots up, press Ctrl-Alt-Delete and then log in as Administrator when prompted.

17. When prompted, change your password to nutanix/4u.

18. Install the NOS Controller VM and provision the hypervisor (see Installing the Controller VM on

page 33).

19. Open a command prompt and enter the following two commands:

> schtasks /create /sc onstart /ru Administrator /rp "nutanix/4u" /tn firstboot /tr D:\firstboot.bat

> shutdown /r /t 0

This causes a reboot and the firstboot script to run, after which the host will reboot two more times. This process can take substantial time (possibly 15 minutes) without any progress indicators. To monitor progress, log into the VM after the initial reboot and enter the command notepad D:\first_boot.log. This displays a (static) snapshot of the log file. Repeat this command as desired to see an updated version of the log file.

Note: A d:\firstboot_fail file appears when this process fails. If that file is not present, the process is continuing (if slowly).
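
As a quick check from a command prompt after the initial reboot, you could test for the failure marker before opening the log. This is only a sketch based on the file names mentioned above; adjust the paths if they differ on your node.

> if exist D:\firstboot_fail (echo firstboot failed - check the log) else (echo firstboot still running or finished)
> notepad D:\first_boot.log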

Installing the Controller VM

This procedure describes how to install the NOS Controller VM and provision the previously installed hypervisor on a single node in a cluster in the field.

Before you begin: Install a hypervisor on the node (see Installing a Hypervisor on page 25).

To install the Controller VM (and provision the hypervisor) on a new or replacement node, do the following:

1. Copy the appropriate Phoenix ISO image file from the Foundation portal (see Foundation Portal on page 36) to a temporary folder on the workstation. (You can download it to the same folder as the hypervisor ISO image.)

Phoenix is the name of the installation tool used in this process. There is a Phoenix ISO image file for each supported NOS release. See the Phoenix Releases section in Foundation Portal on page 36 for a list of the available Phoenix ISO images.

Caution: Phoenix release 1.2 is the earliest supported release; do not use a Phoenix ISO image from an earlier release.

2. In the IPMI web console, attach the Phoenix ISO to the node as follows:

a. Go to Remote Control and click Launch Console (if it is not already launched). Accept any security warnings to start the console.


b. In the console, click Media > Virtual Media Wizard.

c. Click Browse next to ISO Image and select the ISO file.

d. Click Connect CD/DVD.

e. Go to Remote Control > Power Control.

f. Select Reset Server  and click Perform Action.

The host restarts from the ISO.
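
If the web console is unavailable for the reset in steps e and f, an alternative is to reset the chassis over the network with ipmitool from a workstation that can reach the IPMI interface. This is a sketch only: the address and credential placeholders below are hypothetical and must be replaced with your own values, and the virtual media attach itself still has to be done through the console as described above.

ipmitool -I lanplus -H ipmi_ip_address -U ipmi_username -P ipmi_password chassis power reset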

3. At the prompt, enter Y to accept the factory configuration or N if the node position value is not correct.

4. Do the following in the Nutanix Installer configuration screen:

a. Review the values in the Block ID, Node Serial, and Node Cluster ID fields (and Node Model if you entered N in the previous step) and update them if they are not correct.

The Hypervisor Type, Hypervisor Version, and Nutanix Software fields cannot be edited.

b. Do one of the following:

• If you are imaging a U-node, select both Clean Install Hypervisor and Clean Install SVM.
• If you are imaging an X-node, select Clean Install Hypervisor only.

A U-node is a fully configured node that can be added to a cluster. Both the Controller VM and the hypervisor must be installed on a new U-node. An X-node does not include a NIC card or disks; it is the appropriate model when replacing an existing node. The disks and NIC are transferred from the old node, and only the hypervisor needs to be installed on the X-node.

Caution: Do not select Clean Install SVM if you are replacing a node (X-node) because this option cleans the disks as part of the process, which means existing data will be lost.

c. When all the fields are correct, click the Start button.


Installation begins and takes about 20 minutes.

5. In the Virtual Media window, click Disconnect next to CD Media.

6. At the restart prompt in the console, type Y to restart the node.

The node restarts with the new image. After the node starts, additional configuration tasks run and then the host restarts again. During this time, the host name is installing-please-be-patient. Wait approximately 20 minutes until this stage completes before accessing the node.

Caution: Do not restart the host until the configuration is complete.


5: Foundation Portal

The Foundation portal site provides access to many of the files required to do a field installation.

Accessing the Portal

Nutanix maintains a site where you can download Nutanix product releases. To access the Foundation portal on this site, do the following:

1. Open a web browser and go to http://releases.nutanix.com.

The login page is displayed.

2. Enter your Nutanix account or partner portal credentials to access the site.

The Current NOS Releases page appears.

3. In the pull-down list next to your name (upper right), select Foundation to download Foundation-related files or Phoenix to download Phoenix-related files.

Figure: NOS Releases Screen

The Foundation (or Phoenix) releases screen appears.

4. Click the target release link.


Figure: Foundation Releases Screen

The Foundation (or Phoenix) files screen for that release appears. (For Phoenix, you must first select a hypervisor before the files screen appears.)

5. Access or download the desired files from this screen.

Figure: Foundation Files Screen

Foundation Files

The following table describes the files in the foundation-1.2 directory.

VirtualBox-4.3.10-xxxxx-OSX.dmg: The Oracle VM VirtualBox installer for Mac OS. (The xxxxx part of the name is replaced by a build number.)
VirtualBox-4.3.10-xxxxx-OSX.md5sum.txt: The associated MD5 hash value to validate against after downloading.
VirtualBox-4.3.10-xxxxx-Win.exe: The Oracle VM VirtualBox installer for Windows.
VirtualBox-4.3.10-xxxxx-Win.md5sum.txt: The associated MD5 hash value to validate against after downloading.

Foundation-1.2_VM subdirectory
Foundation_VM-1.2.ovf: The Foundation VM OVF configuration file for release 1.2.
Foundation_VM-1.2-disk1.vmdk: The Foundation VM VMDK file for release 1.2.
Foundation_VM-1.2-disk1.md5sum.txt: The associated MD5 hash value to validate against after downloading.

docs subdirectory
Field_Installation_Guide-v1_2.pdf: A PDF version of the Field Installation Guide.
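
Each download has a matching md5sum.txt file so you can confirm that the transfer completed intact. As one illustrative way to check, generate the hash locally and compare it by eye with the published value; the file name below is only an example, so substitute the file you actually downloaded.

certutil -hashfile Foundation_VM-1.2-disk1.vmdk MD5     (on a Windows workstation)
md5 Foundation_VM-1.2-disk1.vmdk                        (on a Mac OS workstation)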

Phoenix Files

The following table describes the files in the phoenix-1.2 and phoenix-1.3 directories. The NOS release 4.0.x files are in the phoenix-1.3 directory, while the NOS release 3.5.x files are in the phoenix-1.2 directory.

Caution: Phoenix release 1.2 is the earliest supported release with Foundation 1.2; do not use an earlier Phoenix release.

ESXi subdirectory
phoenix-1.3_ESX_NOS-4.0.1-stable.iso: The NOS release 4.0.1 Phoenix ISO image for ESXi.
phoenix-1.3_ESX_NOS-4.0.1.md5sum.txt: The associated MD5 hash value (to validate against after downloading the ISO file).
phoenix-1.2_ESX_NOS-3.5.4.iso: The NOS release 3.5.4 Phoenix ISO image for ESXi.
phoenix-1.2_ESX_NOS-3.5.4.md5sum.txt: The associated MD5 hash value.
phoenix-1.2_ESX_NOS-3.5.3.1.iso: The NOS release 3.5.3.1 Phoenix ISO image for ESXi.
phoenix-1.2_ESX_NOS-3.5.3.1.md5sum.txt: The associated MD5 hash value.
phoenix-1.2_ESX_NOS-3.5.2.iso: The NOS release 3.5.2 Phoenix ISO image for ESXi.
phoenix-1.2_ESX_NOS-3.5.2.md5sum.txt: The associated MD5 hash value.
phoenix-1.2_ESX_NOS-3.5.1.iso: The NOS release 3.5.1 Phoenix ISO image for ESXi.
phoenix-1.2_ESX_NOS-3.5.1.md5sum.txt: The associated MD5 hash value.
phoenix-1.2_ESX_NOS-3.1.3.1.iso: The NOS release 3.1.3.1 Phoenix ISO image for ESXi.
phoenix-1.2_ESX_NOS-3.1.3.1.md5sum.txt: The associated MD5 hash value.

HyperV subdirectory
phoenix-1.3_HYPERV_NOS-4.0.1-stable.iso: The NOS release 4.0.1 Phoenix ISO image for Hyper-V.
phoenix-1.3_HYPERV_NOS-4.0.1-stable.md5sum.txt: The associated MD5 hash value.
phoenix-1.2_HYPERV_NOS-3.5.4.iso: The NOS release 3.5.4 Phoenix ISO image for Hyper-V.
phoenix-1.2_HYPERV_NOS-3.5.4.md5sum.txt: The associated MD5 hash value.
phoenix-1.2_HYPERV_NOS-3.5.3.1.iso: The NOS release 3.5.3.1 Phoenix ISO image for Hyper-V.
phoenix-1.2_HYPERV_NOS-3.5.3.1.md5sum.txt: The associated MD5 hash value.

KVM subdirectory
phoenix-1.3_KVM_NOS-4.0.1-stable.iso: The NOS release 4.0.1 Phoenix ISO image for KVM.
phoenix-1.3_KVM_NOS-4.0.1-stable.md5sum.txt: The associated MD5 hash value.
phoenix-1.2_KVM_NOS-3.5.4.iso: The NOS release 3.5.4 Phoenix ISO image for KVM.
phoenix-1.2_KVM_NOS-3.5.4.md5sum.txt: The associated MD5 hash value.
phoenix-1.2_KVM_NOS-3.5.3.1.iso: The NOS release 3.5.3.1 Phoenix ISO image for KVM.
phoenix-1.2_KVM_NOS-3.5.3.1.md5sum.txt: The associated MD5 hash value.
phoenix-1.2_KVM_NOS-3.1.3.1.iso: The NOS release 3.1.3.1 Phoenix ISO image for KVM.
phoenix-1.2_KVM_NOS-3.1.3.1.md5sum.txt: The associated MD5 hash value.


6: Setting IPMI Static IP Address

You can assign a static IP address to an IPMI port by updating the BIOS configuration.

To configure a static IP address for the IPMI port on a node, do the following:

1. Connect a VGA monitor and USB keyboard to the node.

2. Power on the node.

3. Press the Delete key during boot up when prompted to enter the BIOS setup mode.

The BIOS Setup Utility screen appears.

4. Select the IPMI tab to display the IPMI screen.

5. Select BMC Network Configuration and press the Enter  key.

6. Select Update IPMI LAN Configuration, press Enter , and then select Yes in the pop-up window.

7. Select Configuration Address Source, press Enter , and then select Static in the pop-up window.


8. Select Station IP Address, press Enter, and then enter the IP address for the IPMI port on that node in the pop-up window.

9. Select Subnet Mask, press Enter, and then enter the corresponding subnet mask value in the pop-up window.

10. Select Gateway IP Address, press Enter, and then enter the IP address for the node's network gateway in the pop-up window.

11. When all the field entries are correct, press the F4 key to save the settings and exit the BIOS setup mode.
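
After the node boots, one way to confirm that the new settings took effect is to query the BMC with ipmitool, assuming the tool is available in your environment (it typically is on a KVM host; channel 1 is common but may differ on your hardware):

root@host# ipmitool lan print 1

The output includes the address source (which should now show a static address) along with the IP address, subnet mask, and gateway that were just configured.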