
NEC Express5800/A1080a-E Server

VMware vSphere 5 Best Practices

October 2011


PROPRIETARY NOTICE AND LIABILITY DISCLAIMER

The information disclosed in this document, including all designs and related materials, is the valuable property of NEC, Inc. (NEC) and/or its licensers. NEC and/or its licensers, as appropriate, reserve all patent, copyright and other proprietary rights to this document, including all design, manufacturing, reproduction, use, and sales rights thereto, except to the extent said rights are expressly granted to others.

The NEC product(s) discussed in this document are warrantied in accordance with the terms of the Warranty Statement accompanying each product. However, actual performance of each such product is dependent upon factors such as system configuration, customer data, and operator control. Since implementation by customers of each product may vary, the suitability of specific product configurations and applications must be determined by the customer and is not warranted by NEC.

To allow for design and specification improvements, the information in this document is subject to change at any time, without notice. Reproduction of this document or portions thereof without prior written approval of NEC is prohibited.

NEC is a registered trademark, and NEC Express5800 is a trademark of NEC Corporation.

Intel and Xeon are trademarks or registered trademarks of Intel Corporation.

vCenter, vMotion, VMware, and vSphere are trademarks or registered trademarks of VMware. Aptio and American Megatrends are trademarks or registered trademarks of American Megatrends, Inc.

All other product, brand, or trade names used in this publication are the trademarks or registered trademarks of their respective trademark owners.


Table of Contents

NEC Express5800/A1080a-E Server
VMware vSphere 5 Best Practices
Table of Contents
Introduction
Why NEC and VMware?
Overview of the NEC/VMware solution
Overview of NEC Express5800/A1080a
Large scale virtualization with the NEC Express5800/A1080a
Scalable box design and architecture
The Intel Xeon processor E7 family and its features
Overview of VMware vSphere 5
New features in VMware vSphere 5
Review of existing vSphere features from previous versions
The NEC Express5800/A1080a-E
Management features
Web console features
BIOS options and best practices
Internal storage controller and disk setup
Internal storage controller and disk best practices
VMware General considerations
vSphere 5 and the NEC 5800/A1080a-E: General considerations
ESXi: General considerations
Guest OS: General considerations
CPU best practices
vSphere 5 and the NEC 5800/A1080a-E: CPU best practices
ESXi: CPU best practices
Guest OS: CPU best practices
Memory best practices
Storage best practices
vSphere 5 and the NEC 5800/A1080a-E: Storage best practices
ESXi: Storage best practices
Guest OS: Storage best practices
Networking best practices
vSphere 5 and the NEC 5800/A1080a-E: Networking best practices
ESXi: Networking best practices
Guest OS: Networking best practices
VMware vCenter Server and resource management tools: Best practices
VMware vCenter Server
VMware vSphere Client
Recommendations for VM resource management
VMware vMotion, VMware Storage vMotion, VMware High Availability, and VMware Fault Tolerance
VMware Distributed Resource Scheduler and VMware Distributed Power Management
vCenter Update Manager
Summary


Introduction

The NEC Express5800/A1080a (GX) series is NEC’s fifth generation of enterprise server

architecture. Using its years of mainframe experience and technology and the unique

perspective that brings, NEC has designed this high-performing x86 server with large

enterprises in mind.

The NEC Express5800/A1080a-E features the new Intel Xeon processor E7-8800/4800

families of processors. Thanks to a number of features and improvements, these

processors provide better performance and power savings than earlier processor

models.

Pairing the Express5800/A1080a-E with the newest version of VMware’s flagship

hypervisor product, vSphere 5, makes for an outstanding solution to enterprise

virtualization needs.

In this guide, we present the best practices you should consider for the

NEC Express5800/A1080a-E and vSphere 5 pairing. Our emphasis is on the

eight-processor configuration, which best meets the

needs of large enterprises with dense virtualization requirements.

Why NEC and VMware?

Explosive data growth, commodity hardware, green computing, and virtualization

improvements have led to a recent trend of greater VM densities per physical host.

Higher VM densities allow companies to more fully utilize the hardware they have

through massive consolidation efforts. This server consolidation through dense

virtualization minimizes the number of hardware devices and reduces overall power

usage, saving the company money in management man-hours, electricity, and

hardware replacement costs.

However, high VM densities require new servers with large CPU and RAM capacities.

The NEC Express5800/A1080a is a unique and ideal solution to this real-world problem.

The large number of logical processors provides a vast pool of computing resources for

dozens to hundreds of VMs, and the 2 TB maximum RAM configuration provides the

memory capacity to match.

The combination of the NEC Express5800/A1080a family of servers with VMware

vSphere 5 provides a perfect path to dense virtualization.

Overview of the NEC/VMware solution

NEC and VMware have a long-standing relationship, teaming up to provide enterprise

solutions that not only lower costs through consolidation, but also improve

performance and reliability. This NEC Express5800/A1080a server series complements


the capabilities of VMware by adding an expansive hardware scalability element.

Capable of running up to eight high-end Intel Xeon E7 series processors and

2 TB of RAM, the NEC Express5800/A1080a pushes VMware vSphere 5 to its full

capacity and fully supports the virtualization and management features VMware has

enhanced and added in their latest vSphere release. This section outlines specific NEC

and VMware features that make this pairing of hardware and software technology a

powerful combination in the ever-expanding world of virtualization.

Overview of NEC Express5800/A1080a

The NEC Express5800/A1080a is NEC’s flagship, highly scalable HA enterprise server. Boasting a maximum memory configuration of 2 TB and eight CPU sockets, the NEC Express5800/A1080a is both highly scalable and highly flexible. Using the new Intel Xeon processor E7 series and Intel QuickPath Interconnect technology, the NEC Express5800/A1080a takes performance to an entirely new level, outperforming the previous generation by 200 percent with database workloads.1 The NEC Express5800/A1080a is also highly energy efficient thanks to NEC’s innovative green power cooling technology and energy-efficient power supplies. This section briefly outlines the merits of large-scale virtualization with the NEC Express5800/A1080a and its scalable and flexible architecture.

Large scale virtualization with the NEC Express5800/A1080a

The NEC Express5800/A1080a has many key design features that are ideal for

consolidation using large-scale virtualization. Below, we mention only a few of the

technological advances that make consolidating entire database, application, and Web

infrastructures onto a single NEC Express5800/A1080a the preferred solution to

growing IT needs.

Featuring the new Intel Xeon E7 series of processors and eight processor sockets, the NEC Express5800/A1080a can expand to as many as 80 cores (160 threads) of CPU power. This means that by using vSphere 5, which has an achievable maximum of 25 vCPUs per core depending on workload, your NEC Express5800/A1080a has the potential to run 2,000 vCPUs. With this increased processing capacity, overall VM capacity is expanded and vCPUs are more optimally utilized.

1 http://www.nec.com/global/prod/express/product/scalable/index.html

Figure 1: The NEC Express5800/A1080a.
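The arithmetic behind that 2,000-vCPU figure is simple enough to check yourself; the short Python sketch below reproduces it so you can substitute your own core counts and a consolidation ratio appropriate to your workloads (25 vCPUs per core is the workload-dependent upper bound cited above, not a target).

```python
# Rough vCPU capacity estimate for an eight-socket Express5800/A1080a-E.
sockets = 8
cores_per_socket = 10      # top-bin Intel Xeon E7 parts
vcpus_per_core = 25        # workload-dependent upper bound cited above

physical_cores = sockets * cores_per_socket
logical_processors = physical_cores * 2   # with Hyper-Threading enabled
max_vcpus = physical_cores * vcpus_per_core

print(f"{physical_cores} cores / {logical_processors} threads "
      f"-> up to {max_vcpus} vCPUs")      # 80 cores / 160 threads -> up to 2000 vCPUs
```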

With these large core and thread capacities, ancillary but necessary functions, such as live migration and backup processes, need not interfere with application processing. The NEC Express5800/A1080a and Intel Xeon E7 series processors provide ample resources to ensure VMs are never starved for compute power, even during maintenance activities.

The NEC Express5800/A1080a is ideal not only for Web server and application server virtualization, but also for database server virtualization and other workloads that may require high I/O capabilities. The NEC Express5800/A1080a, designed with 12 x8 and 2 x16 PCI Express 2.0 slots (see Figure 2), can handle large numbers of multiple-port NICs and HBAs, for many I/O connections to external networks or storage devices as well as provide sufficient connection redundancy.

With this high number of PCI-E slots, the NEC Express5800/A1080a provides flexibility for existing datacenter installations with split storage area network (SAN) infrastructure and Ethernet network infrastructure, while at the same time allowing room for newer converged network designs, where the SAN and local area network share infrastructure, such as cabling and switches. In either case, always ensure that redundant paths to your storage and your network exist.

If using a converged fabric model, adjust and monitor your Quality of Service (QoS) on your switch hardware to ensure that storage paths always have priority over network paths.

Figure 2: The I/O slot configuration in the NEC Express5800/A1080a server.

If your situation calls for it, the NEC Express5800/A1080a has full support for VMware DirectPath I/O. This feature allows VMs direct access to physical NIC and HBA ports without using paravirtualized adapters (See Figure 3). With the large number of I/O slots, DirectPath I/O is a feature you might wish to use for


specialized VMs and workloads. If you opt to use this feature, consult the latest VMware documentation.

Figure 3: Using DirectPath I/O bypasses the hypervisor to provide performance near non-virtualized speeds.

Configure PCI devices to use DirectPath I/O with the vSphere client: connect to the server, click the Configuration tab, click Advanced Settings (Hardware section), then Configure Passthrough (See Figure 4). From this screen, you can select the device for DirectPath I/O. After configuring the PCI device, edit the applicable VM and select the PCI device for use: on the settings for the VM, add the hardware, add the PCI device, and select the appropriate device from the DirectPath I/O host configuration.

Figure 4: Configuring PCI device passthrough (DirectPath I/O) in the vSphere Client.
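The same configuration can also be driven through the vSphere API instead of the client. The following pyVmomi sketch is a minimal illustration only, assuming the device has already been enabled for passthrough on the host, that the VM is powered off, and that the vCenter address, VM name, and PCI address shown are placeholders; verify the backing fields against the vSphere 5 API reference before relying on it.

```python
# Minimal sketch: attach a DirectPath I/O (PCI passthrough) device to a VM with pyVmomi.
# Assumes the device is already enabled for passthrough on the host and the VM is powered off.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",   # placeholders
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "io-heavy-vm")             # placeholder VM name

# The environment browser lists the host PCI devices that are available for passthrough.
config_target = vm.environmentBrowser.QueryConfigTarget(host=None)
pt = next(p for p in config_target.pciPassthrough
          if p.pciDevice.id == "0000:0b:00.0")                         # placeholder PCI address

backing = vim.vm.device.VirtualPCIPassthrough.DeviceBackingInfo(
    id=pt.pciDevice.id,
    deviceId=format(pt.pciDevice.deviceId % 2**16, "x"),
    systemId=pt.systemId,
    vendorId=pt.pciDevice.vendorId,
    deviceName=pt.pciDevice.deviceName)

dev_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=vim.vm.device.VirtualPCIPassthrough(backing=backing))

# Passthrough VMs require their full memory reservation.
spec = vim.vm.ConfigSpec(deviceChange=[dev_spec], memoryReservationLockedToMax=True)
vm.ReconfigVM_Task(spec=spec)
```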


The NEC Express5800/A1080a also features advanced error-detection technologies. With the Express5800/A1080a Advanced POST, the built-in diagnostics (BID) runs at every boot, detecting system errors automatically. Using the EXPRESSSCOPE Monitor LCD on the front of the Express5800/A1080a, you can view system errors and use the automatic BID to identify and reconfigure failed system hardware.

Scalable box design and architecture

The NEC Express5800/A1080a series is fully customizable to fit the needs of any size

enterprise environment. It utilizes a modular design where the memory and processors

are installed on up to eight internal blade-type modules (called processor memory

modules or PMMs), which can be added and removed for scaling, reconfiguring, or

maintenance (see Figure 5). This design allows for maximum flexibility and scalability,

as your organization may require only two or four sockets now, but in the future may

need the flexibility of expanding to eight sockets without purchasing an entire new

infrastructure.

Figure 5: Processor Memory Module.

The NEC Express5800/A1080a series server can be configured three ways:

as a single server with one to four processors (NEC Express5800/A1080a-S)

as two servers, each with one to four processors (NEC Express5800/A1080a-D)

as a single server with eight processors (NEC Express5800/A1080a-E)

All of these configurations are in a single 7U chassis.

Below, we briefly describe each model configuration. The diagrams use the following

acronyms:

PMMx—processor memory module

IOHx—I/O hub

ICH—I/O control hub


NEC Express5800/A1080a-S

Figure 6: A1080a-S model, a single server, PMMs, and interconnects. (Source: The NEC Express5800/A1080a User Guide.)

The A1080a-S model is a single server, using PMMs 1 through 4. The minimum

processor configuration is one CPU and the maximum processor configuration is four

CPUs. Memory configuration ranges from a minimum of 4 GB (2 x 2 GB) to a maximum

of 1 TB (64 x 16 GB). This server configuration requires that at least PMM 1 be

configured. This configuration is ideal for smaller deployments, such as two-socket

scenarios, but still allows for growth within the NEC infrastructure to additional sockets

later.

NEC Express5800/A1080a-D

Figure 7: A1080a-D model, two servers, PMMs, and interconnects. (Source: The NEC Express5800/A1080a User Guide.)

The A1080a-D model consists of two servers, utilizing PMMs 1 through 4 for Server 1

and PMMs 5 through 8 for Server 2. The maximum processor configuration for each

server is four CPUs. Memory configuration ranges from a minimum of 4 GB (2 x 2 GB)


to a maximum of 1 TB (64 x 16 GB) on each server. This server configuration requires

that at least PMM 1 and PMM 5 be configured. This configuration can be helpful when

you need two distinct servers but want the management capabilities of housing both in

a single box, still allowing room for future growth.

NEC Express5800/A1080a-E

Figure 8: A1080a-E model, one eight-socket server, PMMs, and interconnects. (Source: The NEC Express5800/A1080a User Guide.)

The A1080a-E model is a single server, utilizing PMMs 1 through 8, and takes the server

to the maximum performance configuration. The maximum processor configuration for

this server is eight CPUs, and memory configuration ranges from a minimum of 8 GB (4

x 2 GB) to a maximum of 2 TB (128 x 16 GB) on the server. This server configuration

requires that all PMMs be configured. In this configuration, the I/O interface is

expanded by upgrading the scalable card, which connects PMMs 1 through 4 to PMMs

5 through 8. This configuration gives your server full access to the maximum

configuration capabilities of the Express5800/A1080a and is ideal for heavy workloads

that require multiple large VMs or high numbers of smaller VMs.

For more information on specific hardware design and detailed images describing each

system component and configuration, see the NEC Express5800/A1080a User’s Guide

included on the documentation disk that came with your server.

Note that in this guide, we focus on the NEC Express5800/A1080a-E.

The Intel Xeon processor E7 family and its features

The NEC Express5800/A1080a-E features the new Intel Xeon processor E7-8800/4800

families of processors. This family is Intel’s newest, based on 32nm Intel process

technology. With new architecture design, the Intel Xeon processor E7 family provides


better processing performance and power savings than the previous Intel Xeon

processor 7500 series. It includes the following features and improvements:

The core count per processor ranges from 6 to 10, which means that with Intel Hyper-Threading Technology enabled, there can be up to 20 logical processors per socket, for a total of 160 logical processors on the NEC Express5800/A1080a-E. This is a 25 percent increase in logical processors over the previous generation of Intel Xeon processors.

The processors support up to 30MB cache, allowing the processor to cache even more data for faster access.

The processors support memory architectures that allow for up to 32GB DIMMs, allowing greater memory support per processor and highly scalable deployments.

To increase power savings, the processors utilize Intel Intelligent Power Technology. This power-saving technology allows individual cores to power down to 0 watts when idle and allows the entire processor to idle at a near-zero level, reducing overall power consumption and energy costs during periods of lower usage.

For more information on the Intel Xeon processor E7 family, see

http://www.intel.com/content/www/us/en/processors/xeon/xeon-processor-e7-

family.html.

Figure 9 shows which processors the different NEC Express5800/A1080a models

support.

Processor                                                                  A1080a-S     A1080a-D     A1080a-E

Intel Xeon processor E7-4807 (6 cores, 1.86 GHz, 18MB cache)               Supported    Supported    Not supported
Intel Xeon processor E7-4820 (8 cores, 2.00 GHz, 18MB cache)               Supported    Supported    Not supported
Intel Xeon processor E7-4830 (8 cores, 2.13 GHz, 24MB cache)               Supported    Supported    Supported
Intel Xeon processor E7-8830 (8 cores, 2.13 GHz, 24MB cache)               Supported    Supported    Supported
Intel Xeon processor E7-8850 (10 cores, 2.00 GHz, 24MB cache)              Supported    Supported    Supported
Intel Xeon processor E7-8870 (10 cores, 2.40 GHz, 30MB cache)              Supported    Supported    Supported
Intel Xeon processor E7520 (4 cores, 1.86 GHz, 18MB cache)                 Supported    Supported    Not supported
Intel Xeon processor E7540 (6 cores, 2.00 GHz, 18MB cache)                 Supported    Supported    Supported
Intel Xeon processor X7542 (6 cores, 2.66 GHz, 18MB cache, no HT)          Supported    Supported    Supported
Intel Xeon processor L7545 (6 cores, 1.86 GHz, 18MB cache)                 Supported    Supported    Supported
Intel Xeon processor X7550 (8 cores, 2.00 GHz, 18MB cache)                 Supported    Supported    Supported
Intel Xeon processor L7555 (8 cores, 1.86 GHz, 24MB cache)                 Supported    Supported    Supported
Intel Xeon processor X7560 (8 cores, 2.26 GHz, 24MB cache)                 Supported    Supported    Supported

Figure 9: Processors supported by the various NEC Express5800/A1080a models.

For optimized performance on the eight-socket NEC Express5800/A1080a-E server, we

recommend purchasing the processor with the most cores and cache for your ESXi

platform. This is especially true for CPU-intensive application workloads inside your

guests that thrive with many cores, faster frequencies, or more CPU cache; examples

include wide VMs running SAP workloads, specialized virtual desktop infrastructure

(VDI) scenarios with sophisticated end users running graphically intense operations, or

latency sensitive network applications such as Web servers and messaging.

Overview of VMware vSphere 5

VMware has long been an industry leader in data center virtualization and cloud

computing. They provide products and features that are trusted around the world and

are known for their ability to simplify and streamline enterprise virtualization

platforms. The newest version of their flagship hypervisor product, VMware vSphere

5, builds on years of experience, adding new features and improving existing ones. With

vSphere 5, VMware introduces a new level of simplicity and reliability that makes

vSphere 5 the go-to virtualization platform.

New features in VMware vSphere 5

VMware continues to add more features to vSphere, expanding scalability,

manageability, and infrastructure to accommodate and improve an ever-growing

presence of server virtualization around the globe. Some new features and

improvements in VMware vSphere 5 include the following:

ESXi. To provide a smaller footprint on host servers while maintaining security, VMware has converged its host hypervisor on the ESXi architecture. This thinner hypervisor takes fewer system resources and less time to deploy than the ESX hypervisor. With this change, VMware lets server administrators streamline their server deployments and patch processes.

Virtual hardware Version 8. In vSphere 5, VMware introduces the newest version of virtual hardware, Version 8. This new version provides support for Windows


Aero as well as USB 3.0. It also allows for very large VMs, with up to 1 TB of RAM and 32 vCPUs per VM, which complements the large, scalable NEC platform.

vSphere Storage DRS. This new feature takes the innovative VMware vSphere DRS technology and applies it to storage. vSphere Storage DRS introduces datastore clusters and utilizes them to load balance datastores within a cluster. For more information on DRS and Storage DRS, see the section VMware Storage DRS.

Multi-NIC vMotion. New with VMware vSphere 5, vSphere can use multiple NICs to push vMotion traffic over the vMotion network as quickly as possible, using all available bandwidth on your multiple vMotion NICs. You simply assign multiple NICs to vMotion traffic in vSphere, and need not make any changes on the physical switch.

Higher host and VM configuration maximums. vSphere 5 continues to raise configuration maximums for hosts as well as VMs. vSphere can now support 2 TB of RAM and 512 virtual machines per host. Virtual machines can now be configured with up to 32 vCPUs and 1 TB of RAM, four times more than previous vSphere versions. On a highly scalable system such as the NEC Express5800/A1080a-E, these large configuration maximums are ideal.

vSphere Web Client. VMware vSphere 5 now supports the Web-based vSphere Client. With this new access capability, you can log into vSphere through a Web browser at any location.

VMware vCenter Server Appliance. Prior versions of vSphere provided a standalone vCenter Microsoft Windows installer that needed to be installed on a separate Windows-based server. With vSphere 5, you can now deploy vCenter Server as a Linux-based virtual appliance, delivered in Open Virtualization Format (OVF), that is easy to manage and quick to set up, while reducing the resources necessary for a physical vCenter Server installation.

For more information on the new features in VMware vSphere 5, see

http://www.vmware.com/files/pdf/products/vsphere/vmware-what-is-new-

vsphere5.pdf.

Review of existing vSphere features from previous versions

Along with these new features, vSphere of course carries over from earlier versions

many features that are vital to data center operations. These features include the

following:

VMware High Availability (HA)

VMware Fault Tolerance (FT)

VMware vMotion and Storage vMotion

VMware Distributed Resource Scheduler

VMware vCenter and vSphere Client

Throughout this paper, we offer best practices on each of these features. For more

information on VMware vSphere and a complete list of features, see

http://www.vmware.com/products/vsphere/overview.html.


The NEC Express5800/A1080a-E

Management features

The NEC Express5800/A1080a-E features enterprise-level management tools and

resource monitors, all accessible remotely through a Web console. These include

temperature, power, and cooling monitors for all internal hardware. You can also

access the server BIOS and OS remotely through a remote KVM built into the Web

console. Combined with the NEC Test and Diagnosis On Linux (TeDoLi) program, you

can fully monitor, manage, and troubleshoot every component of the NEC

Express5800/A1080a-E. For more information on the TeDoLi diagnostic tool, consult

the User’s Guide for your server.2 The following section provides a brief overview of

the Web console and its features.

Web console features

The NEC Express5800/A1080a-E is equipped with a management Web console for

remote access to the server and management features from any system connected to

the same network. This feature is divided into three different Web consoles, each with

its own IP address and login credentials: System, Server, and Resource. Each console’s

IP address can be configured using the EXPRESSSCOPE LCD monitor on the front of the

system.

Each Web console contains a help page and a language settings page, where you can

toggle between English and Japanese. The help page provides detailed descriptions of

items, and can be opened and closed as needed. There is also a Refresh button to keep

information up to date as you are configuring items. To disconnect from the Web

console, click the Disconnect link on the left side of the page to end the session.

Figure 10 provides a feature matrix summarizing the major maintenance

features.

2 http://www.58support.nec.co.jp/global/download/


Maintenance feature                                  System    Server    Resource

Component Health information on Summary screen?      Yes       Yes       Yes
FW update capability                                 Yes       No        No
Event Log                                            Yes       Yes       No
Sensor Readings                                      No        Yes       No
KVM redirection                                      No        Yes       No
Virtual LCD (EXPRESSSCOPE)                           No        Yes       No
Fault Information                                    Yes       Yes       No
Service Processor Reset                              Yes       Yes       Yes
Other SP maintenance                                 No        Yes       No
User/Alert Management                                Yes       Yes       No

Figure 10: Web console locations for major maintenance features.

System Web console

The System Web console contains features and settings specific to the overall system.

It provides a high-level view of the entire NEC solution.

Figure 11: System Summary page, System Web console.

Server Web console

The Server Web console can be accessed by using the IP configured through

EXPRESSSCOPE or by clicking on the name of the server on the summary page of the

System Web console. This Web console provides server specific settings and monitors

on a more granular level. If your NEC Express5800/A1080a is configured as a NEC

Express5800/A1080a-D (two servers), this Web console lets you view and configure

each one.


Figure 12: Server Summary page, Server Web console.

Resource Web console

The Resource Web console shows the status and health of individual resources within

the NEC Express5800/A1080a apart from the actual server configuration.


Figure 13: Resource Summary page, Resource Web console.

BIOS options and best practices

The NEC Express5800/A1080a utilizes the Aptio Setup Utility by American Megatrends,

Inc. (AMI). This BIOS firmware is based on UEFI and the Intel Platform Innovation

Framework for EFI. For more information on the features and enhancements of AMI

Aptio, see http://www.ami.com/aptio/. This section outlines BIOS considerations on

the NEC Express5800/A1080a, specifically as they relate to VMware vSphere 5.

Depending on the virtualized workload pattern, you may see different performance

results by using different BIOS options. Below, we provide generalized

recommendations as starting points for your specific environment. Each virtualized

deployment and application is different, so we recommend thorough testing in your

test environments with these settings prior to production implementation.


BIOS option            General vSphere recommendation   Web applications   Database   VDI        Messaging

Hardware Prefetcher    Enable                           Enable             Disable    Disable    Enable
Turbo Boost            Enable                           Enable             Enable     Enable     Disable
Hyper-threading        Enable                           Enable             Enable     Enable     Disable

Figure 14: General recommendations for BIOS options based on virtualized workload profile.

To access the BIOS, press Delete or F2 at the first NEC splash screen during POST

(Figure 15).

Figure 15: Entering the NEC BIOS following the power-on self-test (POST).

For vSphere 5, certain BIOS changes allow for greater performance, or in some cases,

greater power savings. In most cases, we recommend leaving BIOS settings at their

default configuration. Below, we describe various BIOS recommendations.

Basic BIOS considerations

o Be sure your system is running with the latest BIOS update. You can find updates for the NEC Express5800/A1080a-E at http://www.58support.nec.co.jp/global/download/index.html.

o BIOS updates and BMC Firmware updates are released as a set. If you update the BIOS, you should update the BMC Firmware at the same time. Note that when downloading the BMC Firmware, you must accept the end user licensing agreement (EULA) in order to download. On the EULA acceptance page, take note of which BIOS is compatible with the firmware version you are downloading.

o Always test BIOS changes in a non-production environment so that you can fully understand the performance and power ramifications of BIOS setting changes.


Advanced Memory BIOS options

o NUMA. Enables/disables Non-Uniform Memory Access (NUMA). The default setting is Enabled. This setting should be retained since vSphere 5 includes a new virtual NUMA feature that can pass the advantages of the system’s NUMA architecture to guests that have a NUMA-aware OS. NUMA architecture splits system memory by dedicating RAM local to each CPU to that specific CPU. Because multiple CPUs cannot address the same RAM at the same time, this architecture allows CPUs to focus on their own sets of RAM and avoids the inevitable waiting period where CPUs take turns addressing the same RAM.

Advanced CPU BIOS options

o Intel VT. Enables/disables the Intel Virtualization Technology feature. The default setting is Enabled. Intel Virtualization Technology is a hardware-assisted virtualization technology that allows systems with Intel processors to utilize hypervisor technologies such as VMware vSphere 5. When running the NEC Express5800/A1080a with VMware vSphere 5, this setting is required; therefore, always leave this feature enabled.

o Hyper-threading. Enables/disables the Intel Hyper-Threading Technology feature. The default setting is Enabled. Intel Hyper-Threading Technology enables processors to run multiple threads on each core. Enabling Intel Hyper-Threading Technology increases processor throughput performance and is recommended when running VMware vSphere 5 on the NEC Express5800/A1080a.

o Execute Disable Bit. Enables/disables the Execute Disable Bit feature, a hardware security feature developed by Intel. The default setting is Enabled. With this feature enabled, the processor can categorize sections of memory where code can and cannot be executed. If malicious code tries to execute in a disallowed memory buffer, the processor can disable code execution, preventing the code from damaging the system. For security, always enable this setting.

o Hardware Prefetcher. Enables/disables the Hardware Prefetcher feature. The default setting is Enabled. When this feature is enabled, the processor anticipates what data the current program will need from cache and accesses the data. In most cases, this setting should be left at the default, but in some highly random workloads, you may see a performance boost by disabling the hardware prefetcher. As a best practice, test each setting with your specific environment to see the best configuration.

Power related BIOS options

o In most cases, set the BIOS power technology setting for the NEC Express5800/A1080a-E to the “Energy Efficient” option (see Figure 16). If your situation requires fine tuning of CPU BIOS options, you may need to set the power technology setting to Custom.


Figure 16: The Aptio Setup Utility, Advanced CPU configuration; setting the Power Technology to Energy Efficient.

o For even better power savings, enable all C-states in the BIOS. This may affect performance slightly, but it gives ESXi even more control over power management.
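Beyond the BIOS, the ESXi-side power policy can also be reviewed and changed programmatically. The pyVmomi sketch below is a hedged illustration with placeholder connection details; the available policy keys and names are reported by the host itself, so inspect the printed list before applying a key, and test on a non-production host first.

```python
# Sketch: list the ESXi host power policies and apply one by its host-reported key.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",   # placeholders
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi-a1080a.example.com")  # placeholder

power = host.configManager.powerSystem
print("Current policy:", power.info.currentPolicy.name)
for policy in power.capability.availablePolicy:
    print(f"  key={policy.key}  {policy.name} ({policy.shortName})")

# Apply a policy by the key shown above (the key-to-policy mapping is host-reported).
power.ConfigurePowerPolicy(key=power.info.currentPolicy.key)   # no-op example: reapply current
```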

For a complete list of BIOS options, see the NEC Express5800/A1080a User’s Guide

included on the documentation disk that arrived with your Express5800/A1080a-E.

Internal storage controller and disk setup

The NEC Express5800 Scalable HA Server family of servers comes equipped with one or

two LSI MegaRAID 6Gbps SAS controllers and six hot-plug slots per controller for 2.5”

HDDs, with support for RAID 0, 1, 5, 6, 10, and 50. LSI MegaRAID SAS controllers

feature a management utility, the MegaRAID WebBIOS. With this utility, you can

configure your system’s internal disk storage, check the integrity of the RAID array,

analyze controller health, configure disks as hot spares, and rebuild failed disks. The

following section gives a brief overview of the MegaRAID WebBIOS as well as some

best practices for RAID configurations and disk management.

The MegaRAID WebBIOS can be accessed during POST on the NEC Express5800/A1080a-

E by pressing CTRL+H or by using the boot override feature in the system BIOS.


Figure 17. Entering the WebBIOS utility during POST by pressing CTRL+H.

Once the MegaRAID WebBIOS has finished loading, you will be presented with an

Adapter Selection screen. When multiple adapters are present in the system, you can

access each individual controller on this initial screen.

Figure 18. Adapter Selection screen.

Once the controller is selected, you will have access to the MegaRAID WebBIOS home

screen specific to the selected LSI storage controller.


Figure 19. MegaRAID BIOS Config Utility Virtual Configuration.

The right side of the screen lists the disk groups, virtual drives, and physical disks

assigned to the controller. Each disk’s slot number, disk type, capacity, configuration,

and health status is included. The left side of the screen lists the available actions for

the storage controller. Below, we provide a brief overview of each option.

Controller Selection. This option returns you to the Controller Selection page.

Controller Properties. This view presents the current storage controller’s basic information and options.

Scan Devices. This option is used to scan for new disks and configurations when new disks are added to the system. This can be performed while the WebBIOS is running, or when new disks have yet to appear on the storage controller disk lists.

Virtual Drives. This option allows you to initialize virtual drives, check their consistency and properties, and set the boot drive to be presented in the system BIOS when the controller is set as a boot option.

Drives. Here you can view physical drive and disk group properties, as well as rebuild failed drives.

Configuration Wizard. Use this option to set up or delete RAID configurations.

Physical/Logical View. Toggles the right window disk status screen between a physical disk view and a logical disk view. The default window is the physical view when no virtual disk is configured; the logical view is the default when a virtual disk has been configured.

Events. Lists system events for the storage controller. MegaRAID does not support this feature.


Exit. Exits the MegaRAID WebBIOS and reboots the server to continue the boot process.

Internal storage controller and disk best practices

Before using the WebBIOS to configure a virtual disk or adjust controller options, follow

these best practices and considerations.

We recommend configuring a two disk RAID 1 mirrored virtual drive for the installation of ESXi.

Place critical VM virtual disk files on an external SAN.

Use remaining available internal disks for logging, management, and utility files. Configure these with redundant RAID options (1, 5, 6, 10, 50). Choose the appropriate RAID configuration based on your storage capacity needs.

Do not place VM virtual disk files on the ESXi boot image volume.

Never use RAID 0. Though this option allows you to use the full capacity of each physical disk and can offer some performance benefits, it provides no data protection against a disk failure.

When configuring a virtual disk, always use disks of the same capacity and spindle speed.

Before using a virtual disk, perform a slow initialization on the virtual disk. Though this process can take a considerable amount of time to complete, it ensures that the disk is fully initialized before any system load is transferred to the disk. When a fast initialization is used, the storage controller will initialize the disk in the background and can cause performance issues if the disk is used while this process is still running.

When selecting the RAID type to be used in your environment, make sure you understand the function of each type and select the one that provides your desired level of data protection.

Periodically run a consistency check on your virtual drives. This checks the accuracy of redundant data across a virtual disk and rebuilds redundant data as needed.

Always have a healthy, charged backup battery attached to each storage controller. In the event of a power outage, this battery preserves data in the cache that has not yet been written to disk, usually for up to 72 hours (depending on the cache size). It is usually recommended that you replace this battery every three years.

When a failed disk is being rebuilt, allow the virtual disk to fully recover before resuming the use of the disk array.

NEC prohibits resetting the storage controller back to factory defaults.

NEC also advises against changing the defaults of the following controller settings: Cluster Mode, Adapter BIOS, Coercion Mode, PDF Interval, Alarm Control, Cache Flush Interval, Spinup Drive Count, Spinup Delay, and Stop On Error.

The following list indicates NEC recommended settings when configuring a RAID disk:

o Strip Size – 64KB
o Access Policy – RW (Read/Write)
o WrtThru for BAD BBU – Checked (when the WriteBack policy is specified)
o IO Policy – Direct
o Disk Cache Policy – Disable
o Disable BGI – No

For more information on the settings, refer to the NEC Express5800/A1080a User’s

Guide.

VMware General considerations

VMware vSphere 5 contains many configuration settings and features that allow it to

adapt to nearly every x86 virtualized environment. Understanding these features and

their components is key to creating a successful virtualized enterprise infrastructure.

The following sections outline primary features and configuration settings that should

be considered when setting up a VMware vSphere 5 environment with the NEC

Express5800/A1080a-E server. Understanding that most datacenter hardware is not

homogeneous, we include not only vSphere 5 best practices for the NEC

Express5800/A1080a server series, but also best practices that will help to merge the

NEC platform with your existing infrastructure.

vSphere 5 and the NEC 5800/A1080a-E: General considerations

Prior to deploying your NEC Express5800/A1080a-E server with VMware vSphere 5,

evaluate your system’s hardware for compatibility restrictions and hardware errors.

Follow these general hardware considerations:

After installing vSphere 5, test your new system for hardware errors for 72 hours. This should be long enough to detect any faulty hardware before moving the server into full production.

Check all subcomponents of your system, such as network cards, Fibre Channel controllers, and other PCI devices, for vSphere 5 compatibility. Even a slight hardware incompatibility can have a great impact on performance and can keep vSphere 5 or your server from performing optimally. (A sketch for collecting this hardware inventory follows below.)

Following these suggestions can help avoid any issues once the server is in production.

It also minimizes the risk of lost data or performance problems, because the issues will

be discovered before any production workload is transferred to the server.
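To make the compatibility review above repeatable, you can capture a basic hardware inventory from each host and compare it with the VMware compatibility guide. The Python sketch below simply wraps standard ESXi 5.x esxcli namespaces over SSH; the host address is a placeholder, and it assumes SSH (Tech Support Mode) is enabled on the host.

```python
# Sketch: pull a basic ESXi 5.x hardware inventory over SSH for compatibility checks.
import subprocess

HOST = "root@esxi-a1080a.example.com"   # placeholder; requires SSH enabled on the host

COMMANDS = {
    "Platform":         ["esxcli", "hardware", "platform", "get"],
    "CPU":              ["esxcli", "hardware", "cpu", "global", "get"],
    "Network adapters": ["esxcli", "network", "nic", "list"],
    "Storage adapters": ["esxcli", "storage", "core", "adapter", "list"],
}

for title, cmd in COMMANDS.items():
    print(f"=== {title} ===")
    result = subprocess.run(["ssh", HOST] + cmd,
                            capture_output=True, text=True, check=True)
    print(result.stdout)
```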

ESXi: General considerations

The installation of ESXi on the NEC Express5800/A1080a-E is performed in the same way as past installations of ESXi, via local CD media or common deployment tools. No non-default configuration changes are necessary to install ESXi on the NEC Express5800/A1080a-E.


The NEC Express5800/A1080a-E comes equipped with hardware, performance, and tools to run many VMs. As a precaution, however, perform thorough research and documentation on CPU utilization, memory capacities, etc. when planning your ESXi infrastructure with the NEC Express5800/A1080a server family.

In vSphere 5, VMware introduces virtual hardware version 8 – a leap forward in VM maximum capacity configurations. Paired with VMware’s new virtual hardware version 8, the large CPU and memory capacities of the NEC Express5800/A1080a-E make the possibility of the “monster VM” a reality. VMs can now have up to 32 vCPUs assigned and up to 1 TB of RAM.

Figure 20: New vSphere 5 virtual machine maximums allow up to 32 vCPUs and 1 TB of RAM.

If you use the new virtual hardware version 8 in ESXi 5, be aware that it is not completely compatible with previous versions of ESX/ESXi. If your environment consists of a mix of ESX/ESXi 4.x and ESXi 5, DRS and vMotion will not be able to move the new version 8 VM onto any hosts running ESX/ESXi 4.x. For more information on vMotion and DRS, see the section VMware vMotion, VMware Storage vMotion, VMware High Availability, and VMware Fault Tolerance.
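As a concrete illustration of these limits, the hedged pyVmomi sketch below upgrades a powered-off VM to virtual hardware version 8 and then enlarges it; the connection details, VM name, and sizes are placeholders, each task should be allowed to complete before the next is issued, and the version 8 compatibility caveat above still applies in mixed ESX/ESXi 4.x and 5.0 clusters.

```python
# Sketch: upgrade a powered-off VM to virtual hardware version 8 and enlarge it.
# Version 8 VMs cannot run on (or migrate to) ESX/ESXi 4.x hosts.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",   # placeholders
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "monster-vm")              # placeholder VM name

# Upgrade the virtual hardware first (wait for this task before reconfiguring).
vm.UpgradeVM_Task(version="vmx-08")

# vSphere 5 allows up to 32 vCPUs and 1 TB of RAM per VM; 256 GB is used here.
spec = vim.vm.ConfigSpec(numCPUs=32, numCoresPerSocket=8, memoryMB=256 * 1024)
vm.ReconfigVM_Task(spec=spec)
```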

Guest OS: General considerations

The virtual machine, or the guest, is the heart of your new consolidated NEC and VMware virtualized approach. The VM contains your users’ applications, and must be tuned, patched, and running smoothly for your enterprise. With the configuration possibilities on the NEC Express5800/A1080a-E server, the number of VMs on one host can run in the dozens or even hundreds. With these VM densities, it is vital to follow standard procedures, policies, and best practices. We offer the following recommendations for Guest OS considerations:

Ensure your VM operating systems are supported by VMware. A list of supported operating systems is located at http://partnerweb.vmware.com/GOSIG/home.html.

Disable all screen savers, window animations, and X servers (Linux). Leaving these enabled will use added CPU resources and can affect not only the performance of the VM, but the overall performance of the cluster and VMware features like DRS and DPM.


Make sure that all VMs have updated versions of VMware Tools installed on their guest operating systems. Drivers critical to VMware features, like ballooning, are installed with VMware Tools. (A sketch for auditing Tools status across the inventory follows this list.)

Always update VMware Tools on your VMs as you upgrade your installations of ESXi.

VMware recommends you use either VMware Tools time synchronization or another utility for keeping time if you require time synchronization across your virtualized environment. Never use two different utilities for time keeping simultaneously.

When trying to capture VM-specific performance data, keep in mind that the VM’s timing may be slightly off, especially when resources are fully utilized.

VMware recommends using the performance tools in the vSphere Client or

VMware Tools, or esxtop on the host.
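As a small aid to the VMware Tools recommendations above, the pyVmomi sketch below walks the inventory and reports VMs whose Tools are missing, stopped, or out of date; the connection details are placeholders.

```python
# Sketch: report VMs whose VMware Tools are not installed, not running, or out of date.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",   # placeholders
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)

for vm in view.view:
    status = vm.guest.toolsStatus   # toolsOk, toolsOld, toolsNotRunning, toolsNotInstalled
    if status != "toolsOk":
        print(f"{vm.name}: VMware Tools status is {status}")
```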

CPU best practices

vSphere 5 and the NEC 5800/A1080a-E: CPU best practices

Be assured that the CPU family included with the NEC Express5800/A1080a-E, the Intel

Xeon processor E7 family, is compatible and prepared for high-density virtualization

with vSphere 5. In order to seamlessly migrate workloads to your new NEC Express

environment, you may need to consider the CPU family and generation of hosts in your

existing infrastructure, for vSphere features such as vMotion and VMware Fault

Tolerance to function properly with your NEC Express5800/A1080a-E.

For vMotion, a vital feature for moving running workloads between hosts, Enhanced

VMotion Compatibility (EVC) plays an important role in mixed generation clusters. EVC

is a feature of vSphere that allows vMotion across processor families. When

implementing your NEC Express5800/A1080a-E into existing VMware clusters, consider

the following:

Check the EVC mode of the existing cluster and ensure that it is set to allow vMotion migrations between the older hosts and the new NEC Express5800/A1080a-E.

If you plan to use VMware Fault Tolerance, ensure other CPUs in your clusters are compatible with VMware Fault Tolerance.

ESXi: CPU best practices

Generally, setting CPU affinity on your VMware guests is not a best practice as the scheduler adapts to changing conditions. In some rare cases, however, binding VMware guests to specific sockets with CPU affinity benefits performance. For example, a wide guest requiring the same number of cores as are found on a


processor socket would, if bound to the socket, have sole access to local CPU cache and memory, resulting in lower latencies. As a second example, assigning CPU affinity would provide the means to crudely divide a busy VMware host into sections where a small number of guests run on a specific socket somewhat isolated from the remaining guests.

Monitor host CPU utilization often. Keeping CPU utilization at around 80 percent is generally acceptable. If CPU utilization averages approximately 90 percent, consider using fewer VMs on the host, allowing DRS to function more freely, or expanding the current server infrastructure (see the monitoring sketch after this list).

The default vSphere 5 CPU power management policy is Balanced. While this setting generally doesn’t impact overall performance for most workloads, consider changing this setting to High Performance if your environment will be running high-performance workloads.

ESXi automatically uses hyper-threading when enabled on a server (the default setting for the Express5800/A1080a-E). Hyper-threading will boost performance by enabling the use of an additional thread per core. However, take special caution and test thoroughly when combining hyper-threading and CPU affinity, if your environment requires the use of CPU affinity.

For best performance, allow ESXi to set the CPU/MMU Virtualization automatically for each VM. Only in rare situations does this setting need to be adjusted.
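To apply the utilization guidance above (see the monitoring bullet in the list), the pyVmomi sketch below derives each host's current CPU utilization from its quick statistics and flags hosts above the 80 and 90 percent thresholds; connection details are placeholders, and quick statistics are point-in-time values, so trend them over time rather than acting on a single sample.

```python
# Sketch: flag ESXi hosts whose current CPU utilization crosses the ~80%/90% guidance above.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",   # placeholders
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    hw = host.summary.hardware
    capacity_mhz = hw.cpuMhz * hw.numCpuCores             # total physical CPU capacity
    used_mhz = host.summary.quickStats.overallCpuUsage    # current usage, in MHz
    pct = 100.0 * used_mhz / capacity_mhz
    note = "OK" if pct < 80 else ("watch" if pct < 90 else "reduce load")
    print(f"{host.name}: {pct:.1f}% CPU utilization ({note})")
```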

Guest OS: CPU best practices

New to vSphere 5, VMs can now use the underlying host NUMA architecture to improve performance in conjunction with operating systems that utilize NUMA topology. This feature is called Virtual NUMA (vNUMA). When using vNUMA-enabled VMs, make sure the NUMA architecture across all hosts in the cluster is the same. A VM will set its vNUMA based on the NUMA architecture on the host it is on when vNUMA is first enabled. If the VM moves to a host with a different architecture, the VM’s vNUMA settings may be different from the original host, causing a loss in performance.

When setting the number of cores and vCPUs on VMs, base your choices on the current host’s NUMA topology. If there are four cores per NUMA node, set your vCPU count to a multiple of four. When the number of cores per vCPU socket is set to a number other than the default of one, choose a number based on the NUMA node size on the host.
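As a sketch of the sizing rule above, the pyVmomi fragment below reads the host's NUMA layout and applies a vCPU count and cores-per-socket value that line up with it; the VM and vCenter names are placeholders, the VM should be powered off, and the two-node sizing shown is only an example.

```python
# Sketch: size a VM's vCPUs as a multiple of the host's cores-per-NUMA-node count.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",   # placeholders
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "numa-aware-vm")           # placeholder VM name
host = vm.runtime.host

numa_nodes = host.hardware.numaInfo.numNodes
cores_per_node = host.hardware.cpuInfo.numCpuCores // numa_nodes
print(f"{host.name}: {numa_nodes} NUMA nodes, {cores_per_node} cores per node")

# Example layout: two NUMA nodes' worth of vCPUs, one virtual socket per node.
spec = vim.vm.ConfigSpec(numCPUs=cores_per_node * 2,
                         numCoresPerSocket=cores_per_node)
vm.ReconfigVM_Task(spec=spec)
```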

Memory best practices

VMware ESXi is known for its ability to overcommit system memory to increase the

density of virtual machines on a server. It uses the following features to achieve even

memory resource management across VMs: Page Sharing, Ballooning, Memory

Compression, Swap to Host Cache, and Regular Swapping. More detailed information

on how these features work can be found on page 26 in VMware’s Performance Best

Practices for VMware vSphere 5.0.


While these features allow for significant memory overcommit, you should still

carefully allocate memory resources to VMs on each host. Giving VMs more RAM than

they need can reduce the number of VMs that a host can run, while giving them too

little memory can affect the performance of workloads in the virtual environment.

Also, the number of VMs running on a single server increases the overall memory

overhead required for running the VMs themselves. As a general rule, configure and

power on only the VMs you need; doing so will minimize overall memory overhead and

resource waste.
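To keep the allocation guidance above visible day to day, a simple report of configured versus physical memory per host helps. The pyVmomi sketch below computes a rough overcommit ratio from powered-on VMs; connection details are placeholders, and the figure ignores per-VM memory overhead, so treat it as an approximation.

```python
# Sketch: report configured-vs-physical memory per host as a rough overcommit ratio.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",   # placeholders
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)

for host in view.view:
    physical_gb = host.summary.hardware.memorySize / 1024 ** 3
    configured_mb = sum(vm.config.hardware.memoryMB for vm in host.vm
                        if vm.config is not None and
                        vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOn)
    ratio = (configured_mb / 1024) / physical_gb
    print(f"{host.name}: {configured_mb / 1024:.0f} GB configured on "
          f"{physical_gb:.0f} GB physical (overcommit ratio {ratio:.2f})")
```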

Storage best practices

vSphere 5 and the NEC 5800/A1080a-E: Storage best practices

Below, we outline certain recommendations for designing your storage infrastructure.

The storage infrastructure in your environment can have an enormous impact on

overall performance. Many workloads are strongly affected by high I/O latency and

poor storage networking, and even small misconfigurations can cause lower-than-

expected performance. By fully utilizing features offered by both VMware as well as

your storage vendor, you can create a storage infrastructure that works in unison with

your servers to achieve the best performance for your environment.

When planning your storage infrastructure, consider the section on vMotion and Storage vMotion in the section VMware vMotion, VMware Storage vMotion, VMware High Availability, and VMware Fault Tolerance. The overall performance of these features depends greatly on the layout of your external storage.

Select storage subsystems that are compatible with VMware vStorage APIs for Array Integration (VAAI). VAAI features improve overall storage infrastructure scalability by offloading storage computation from the server to the storage itself. For example, on SANs with VAAI, cloning operations run at the storage level, freeing host resources. (A sketch for checking per-device VAAI status appears at the end of this section.)

When using iSCSI and NFS interfaces, keep the same number of Ethernet connections on either end of the storage infrastructure. This ensures that there are no bottlenecks in the storage network infrastructure caused by routing multiple connections through fewer connections.

Figure 21: The NEC Storage M-Series: NEC M100.


When using Fibre Channel, ensure that speeds remain consistent across network connections.

On Fibre Channel HBAs, set the appropriate queue depth.

For best I/O load performance, divide the work across all storage processors and HBAs connected to the storage. If the I/O workload is heavy, separate the storage processors and assign them to separate hosts. This will allow the storage processors to better handle storage traffic.

For more information on general storage best practices, see VMware’s white paper

Performance Best Practices for VMware vSphere 5.
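To verify the VAAI point raised earlier in this section, the storage stack on each host reports per-device VAAI support. The Python sketch below wraps the relevant ESXi 5.x esxcli command over SSH; the host address is a placeholder and SSH must be enabled on the host.

```python
# Sketch: list per-device VAAI (ATS/Clone/Zero/Delete) support on an ESXi 5.x host via SSH.
import subprocess

HOST = "root@esxi-a1080a.example.com"   # placeholder; requires SSH enabled on the host

result = subprocess.run(
    ["ssh", HOST, "esxcli", "storage", "core", "device", "vaai", "status", "get"],
    capture_output=True, text=True, check=True)

# Each device reports ATS, Clone, Zero, and Delete status (supported/unsupported).
print(result.stdout)
```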

ESXi: Storage best practices

When implementing your NEC and VMware solution, consider the following for ESXi:

As we mentioned above, use storage hardware with VMware vStorage APIs for Array Integration (VAAI). ESXi automatically detects VAAI storage and immediately takes advantage of VAAI’s features without additional setup.

When creating virtual disks in ESXi, set their type to eager-zeroed and their mode to independent persistent for the best performance. For more detail on the types of virtual disks and virtual disk modes, see page 30 in VMware’s Performance Best Practices for VMware vSphere 5.0.

If your environment requires more granular management of storage resources, use the limits and shares features of the virtual disk properties on individual VMs. By setting a specific limit on each virtual disk, you can cap the amount of storage I/O each VM can generate. Likewise, you can use shares to divide storage resources based on overall priorities. The higher the number of shares a VM has, the higher priority it has over VMs with fewer shares.

When using multipathing, assign the Most Recently Used path policy to active/passive storage arrays and the Fixed policy to active/active arrays for the best performance. Also, check with your storage vendor for multipathing drivers and utilities for VMware; applying these vendor-specific drivers to ESXi can greatly improve your storage array's performance.

Guest OS: Storage best practices

ESXi 5 has three default virtual storage adapters available for assignment to guests, depending on the virtual hardware version and the guest OS: BusLogic Parallel, LSI Logic Parallel, and LSI Logic SAS. ESXi also has a paravirtualized SCSI adapter called VMware Paravirtual (PVSCSI), a high-performing virtual SCSI adapter best suited for environments with demanding I/O workloads. Below, we discuss some recommendations regarding SCSI adapter choices and other guest storage options.

Check VMware’s approved support list for operating systems that can use the PVSCSI adapter for both data disks and boot disks. This list is available at http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010398

Use the PVSCSI adapter, especially for virtual disks containing volatile workload data, such as OLTP database data. The PVSCSI adapter uses less CPU and potentially increases application throughput in comparison to the three other adapters.
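A hedged pyVmomi (Python) sketch of adding a PVSCSI controller to an existing VM follows. It assumes a live connection and a vim.VirtualMachine object named vm; the bus number and temporary device key are arbitrary examples, not values from this guide.

```python
# A sketch only: assumes a live pyVmomi connection and a vim.VirtualMachine "vm".
from pyVmomi import vim

def add_pvscsi_controller(vm, bus_number=1):
    controller = vim.vm.device.ParaVirtualSCSIController()
    controller.busNumber = bus_number
    controller.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing
    controller.key = -101  # temporary negative key; vSphere assigns the real one

    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    change.device = controller
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```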

Figure 22: Selecting the VMware Paravirtual SCSI controller in the vSphere Client.

Queue depth settings can greatly affect disk performance. Adjust these settings based on the vendor recommendations for the storage driver that is in use on the VM.

Networking best practices

vSphere 5 and the NEC 5800/A1080a-E: Networking best practices

Prior to attempting any network performance tuning, consider the following physical network adapter recommendations:

For a system with the power and performance of the NEC Express5800/A1080a-E, use enterprise-class 10Gb networking hardware.

Consider the existing network infrastructure before installing your server. Be sure your network cables and switch can support the speed of your server NICs. If your NICs run at 10 Gb/s, your switch should be set to handle 10Gb connections and your network cables should be compatible with 10Gb NICs.

For VMware vSphere 5, all NICs used on the NEC Express5800/A1080a-E should support the following features:

o Checksum offload
o Jumbo frames (JF)
o TCP segmentation offload (TSO)
o Ability to handle high-memory DMA
o Ability to handle multiple Scatter Gather elements per Tx frame
o Large receive offload (LRO)

Also consider which PCIe slots your 10Gb NICs occupy. Single-port 10Gb NICs should use PCIe x8 slots, of which the NEC Express5800/A1080a-E has 12; dual-port 10Gb NICs should use PCIe x16 or higher, and the NEC Express5800/A1080a-E has two of these slots available.

ESXi: Networking best practices

Below, we provide a few brief best practices on ESXi networking.

Separate VMkernel and virtual machine traffic onto two different vSwitches with different network adapters attached. This prevents virtual machine traffic from interfering with VMkernel traffic (or vice versa, for example during a vMotion) when one type of traffic saturates the available network bandwidth. However, make sure all VMs that need to communicate with each other are on the same vSwitch to prevent unnecessary traffic over physical network cables and switches.

If possible, attach multiple physical NICs to a single vSwitch. Doing so provides passive failover as well as load balancing across two or more NICs. These NIC teams can help improve overall performance across the network infrastructure.
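As one possible way to script this, the hedged pyVmomi (Python) sketch below creates a vSwitch backed by two physical uplinks. It assumes a connected vim.HostSystem object named host, and the vmnic names are placeholders for the adapters installed in your server.

```python
# A sketch only: assumes a connected vim.HostSystem object "host"; the vmnic
# names are placeholders for the physical 10Gb adapters in your server.
from pyVmomi import vim

def create_teamed_vswitch(host, name='vSwitch1', uplinks=('vmnic2', 'vmnic3')):
    spec = vim.host.VirtualSwitch.Specification()
    spec.numPorts = 128
    spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=list(uplinks))  # NIC team
    host.configManager.networkSystem.AddVirtualSwitch(vswitchName=name, spec=spec)
```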

For more ESXi networking best practices, see VMware's Performance Best Practices for VMware vSphere 5.0.

Guest OS: Networking best practices

For best performance, choose the VMXNET3 paravirtualized network adapter when selecting the virtual NIC for your VMs. Keep in mind that the VMs must be hardware version 7 or 8 and the guest OS must support the VMXNET3 adapter in order to use it. Making this choice also means these VMs will not be able to vMotion to ESX/ESXi versions earlier than 4.0.

If your VMs require jumbo frames or TCP Segmentation Offload (TSO), you must use the VMXNET3, Enhanced VMXNET, or E1000 virtual network adapters, as they are the only options that support these features.
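A hedged pyVmomi (Python) sketch of attaching the VMXNET3 adapter recommended above follows. It assumes a live connection, a vim.VirtualMachine object named vm, and a port group name that is only an example.

```python
# A sketch only: assumes a live pyVmomi connection and a vim.VirtualMachine "vm";
# the port group name is an example.
from pyVmomi import vim

def add_vmxnet3_nic(vm, portgroup='VM Network'):
    nic = vim.vm.device.VirtualVmxnet3()
    nic.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(deviceName=portgroup)
    nic.connectable = vim.vm.device.VirtualDevice.ConnectInfo(startConnected=True)

    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    change.device = nic
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```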

VMware vCenter Server and resource management tools: Best practices

VMware vCenter Server is VMware's flagship platform for managing your vSphere clusters, and it serves as your administrator's main interface to your NEC Express5800/A1080a-E hosts. vCenter Server provides unparalleled virtual management features and brings simplicity and extensive capabilities to managing thousands of virtual machines running on your NEC infrastructure. By following the best practices outlined in this section, you can fully utilize the features of vCenter Server and the vSphere Client.

Figure 23: VMware vCenter Server.

VMware vCenter Server

For each vCenter Server system in a given environment, be aware of the maximum configuration guidelines. Below, we briefly outline a few of the important configuration maximums for your vCenter Server infrastructure and design. For more details, see the Configuration Maximums document from VMware at http://www.vmware.com/pdf/vsphere5/r50/vsphere-50-configuration-maximums.pdf.

o ESXi hosts per cluster: 32
o Virtual machines per cluster: 3,000
o Virtual machines per host: 512
o Resource pools per cluster: 1,600

The number of VMs, hosts, and vSphere clients connected simultaneously to a single vCenter Server system directly affects its performance. Closely following these guidelines will greatly improve the performance and stability of a vCenter Server system.

System resources on a vCenter Server system directly affect the server’s performance. Be sure your vCenter Server system has enough CPU, memory, and storage to perform the tasks required in your environment. The minimum system requirements for a vCenter server are available at http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-50-installation-setup-guide.pdf. Below, we list several important minimum system requirements:

o Processor: 2.0 GHz or faster two-core Intel 64 or AMD 64 processor
o RAM: 4 GB
o Disk: 4 GB (Note: We recommend at least 10 GB for temporary files, updates, patches, and flexibility.)

vCenter Server uses a database to store inventory configuration information, performance data, alarms, tasks, and events. vCenter Server for Windows includes a version of SQL Server Express, while the new vCenter Server Linux appliance includes an embedded DB2 database. For a small environment with only a few hosts, we recommend running these embedded or express database options locally. However, in a typical enterprise environment with large numbers of ESXi hosts and VMs to manage, we recommend an enterprise-level database management platform; vCenter Server supports IBM DB2, Oracle, and Microsoft SQL Server. This database instance may exist prior to your vCenter Server installation and will often reside on a different system than vCenter Server. For database configuration options, refer to the Performance Best Practices for VMware vSphere 5 white paper.

If you choose to use a remote database, minimize the number of network hops between a vCenter Server system and the vCenter Server database. Doing so reduces the latency of tasks run on the vCenter Server.

VMware vSphere Client

The number of vSphere Client connections to a vCenter Server can greatly affect vCenter performance. Always disconnect vSphere Client sessions from vCenter Server when the client is not in use. Following this practice can greatly improve the responsiveness of the vSphere Client user interface when connected to the vCenter Server.

Many instances of vSphere Client can be open on a single server. However, to avoid a lack of available system resources, monitor system resource usage closely and close connections as necessary.

When searching for items in the vSphere Client inventory, do not navigate to the item through the inventory panel. Instead, use the inventory search feature, which uses fewer system resources.

VMware also has a vSphere Web Client that can be run through a Web browser. This client can be installed on the same system as vCenter Server, but VMware generally recommends separating the two. VMware also recommends that the browser window logged into the vSphere Web Client be closed daily to ensure that the Web Client does not consume more memory than necessary.

For more information, see http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.0.pdf.

Recommendations for VM resource management

VMware allows you to fully manage the allocation of CPU and memory at the VM level. These settings allow you to fine-tune your specific workloads and VM resource allocations, which can help improve VM performance. They also allow you to ensure that critical VMs have enough resources available to perform at their peak.

These settings are Reservations, Limits, and Shares. A Reservation "reserves" a resource, such as CPU or memory; use a Reservation to define the minimum acceptable amount of that resource for a VM. A Limit is just that: a cap on the applicable resource. Shares let you prioritize VM resource allocation when the host is under stress and resources are contended.
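The hedged pyVmomi (Python) sketch below shows how these three controls map onto a VM's CPU allocation settings. It assumes a live connection and a vim.VirtualMachine object named vm; the MHz and share values are examples only, not recommendations.

```python
# A sketch only: assumes a live pyVmomi connection and a vim.VirtualMachine "vm";
# the reservation, limit, and share values are examples, not recommendations.
from pyVmomi import vim

def set_cpu_allocation(vm, reservation_mhz=1000, limit_mhz=4000, shares=2000):
    alloc = vim.ResourceAllocationInfo()
    alloc.reservation = reservation_mhz  # Reservation: guaranteed minimum, in MHz
    alloc.limit = limit_mhz              # Limit: hard cap, in MHz (-1 means unlimited)
    alloc.shares = vim.SharesInfo(level=vim.SharesInfo.Level.custom, shares=shares)
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(cpuAllocation=alloc))
```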

Figure 24: Assigning Shares, Reservations, and Limits in the vSphere Client.

Below, we present best practices for VM resource management settings to achieve optimum performance.

Only use Reservations, Shares, or Limits if you find that your environment requires them.

Use resource pools to isolate certain hardware resources, or even certain subsets of hardware resources, to your user groups or customers. If you wish to fully isolate the resource pool, set the type to Fixed and use both Reservations and Limits.

To ensure that a VM gets the minimum CPU and memory it needs, set a reservation. Remember to set this as the minimum desired CPU and memory; setting too many high reservations on VMs can limit the number of VMs that can be powered on in the environment.

When using reservations in a DRS cluster, allow for extra CPU and memory capacity. If the reserved resources in a cluster are close to capacity, DRS will be unable to fully balance cluster load.

When using multiple VMs running the same multi-tier application, group them into a single resource pool. By doing this, you can then allocate resources directly for the multi-tier application.

VMware vMotion, VMware Storage vMotion, VMware High Availability, and VMware Fault Tolerance

VMware’s high availability and fault tolerance cluster technologies minimize VM

downtime during system maintenance and crisis situations. The four main technologies

VMware has developed to minimize downtime are VMware vMotion, VMware Storage

vMotion, VMware High Availability, and VMware Fault Tolerance. The following

sections describe the best practices when using these technologies in your

NEC/VMware environment.

VMware vMotion

VMware vMotion allows a user to move powered-on VMs to other hosts within a cluster with little to no interference with the workload running on that VM. This transfer process is completely transparent to the end user on the VM. This feature allows data center administrators to perform maintenance tasks on hosts without having to power down the VMs running on the hosts, and it enables other VMware "movement" features, such as DRS and DPM.

Figure 25: VMware vMotion.

Below, we list some best practices for your NEC infrastructure as it relates to VMware vMotion.

If you are running ESXi 5 on your NEC Express5800/A1080a-E servers, ensure the virtual hardware version on the VMs is upgraded to version 8.

For fastest vMotion performance, dedicate 10Gb network interfaces and multiple VMkernel vNICs to vMotion, using the new multi-NIC vMotion capability included in ESXi 5. Adding more network bandwidth increases vMotion performance.

Because vMotion consumes some CPU during migrations, leave spare CPU capacity on each host in a cluster that will be involved in vMotion. This improves the performance and speed of vMotion operations.
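As a back-of-the-envelope illustration of why bandwidth matters, the plain-Python sketch below (not a VMware formula) estimates the first-pass memory copy time for a migration; the 70 percent link-efficiency factor is an assumption, and real vMotion times also depend on how quickly the guest dirties memory pages.

```python
# Plain Python, rough estimate only: first-pass memory copy time for a vMotion.
def estimate_vmotion_seconds(active_memory_gb, nics, nic_gbps=10, efficiency=0.7):
    """Ignores dirty-page re-copy rounds; efficiency is an assumed factor."""
    usable_gbps = nics * nic_gbps * efficiency
    return active_memory_gb * 8 / usable_gbps

print(estimate_vmotion_seconds(64, nics=1))  # roughly 73 s over one 10Gb NIC
print(estimate_vmotion_seconds(64, nics=2))  # roughly 37 s over two 10Gb NICs
```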

VMware Storage vMotion

VMware Storage vMotion is similar to vMotion, but applies to the virtual disk files of a VM. If a storage array requires maintenance or disk replacements, a data center administrator can use Storage vMotion to migrate VM files from that storage array to a healthy storage array within the cluster. Again, this transfer is completely transparent to the end user and does not require the VM to be powered off.

Figure 26: VMware Storage vMotion.

Below, we list some best practices for using Storage vMotion in your NEC infrastructure.

Use VAAI-capable storage arrays in your cluster environment to improve the performance of Storage vMotion.

For best Storage vMotion performance, wait for periods when the source and destination arrays are at low utilization.

Be sure that bandwidth is available across the storage and host networks during a planned Storage vMotion.

VMware High Availability

VMware HA shifts VM workloads to other hosts in a cluster when the original host becomes isolated or goes down. Though it requires a full restart of VMs, VMware HA settings ensure that a cluster can run all VMs at the same performance level before and after a failed host is removed from the cluster.

Figure 27: VMware High Availability.

When enabled in a cluster, VMware High Availability selects a master host from the available hosts to handle HA-specific tasks. These tasks include monitoring the health of the cluster and initiating the failover of VMs in the event of a host failure or host isolation. Such failures are most often caused by network isolation, storage isolation, or server hardware failure. Below, we review some best practices as they relate to VMware HA.

If the physical switches that connect your servers support the PortFast option, enable this feature. Spanning Tree processes can take some time, which may cause a network isolation event. Enabling PortFast will help to prevent this.

Use network teaming and configure redundant physical and virtual switches for your management networks.

Use DNS resolution for your hosts.

For Host Isolation Response, leave the default setting of “Shut down” for your VMs. This will allow your VMs to gracefully perform an OS shutdown prior to transferring hosts and rebooting on those new hosts.

VMware Fault Tolerance

Like VMware HA, VMware Fault Tolerance (FT) is designed to protect your cluster from unexpected host failures. Rather than restarting your VMs on other hosts, however, FT creates a live shadow secondary VM on another host that is kept in lockstep with the primary VM, even while the VM is running and active. In the event of a failover, the workloads running on primary VMs attached to the failed host shift to their secondary VMs on other hosts. FT lets you avoid VM downtime and still protect your cluster from failures, but does so at the cost of overall VM capacity.

Figure 28: VMware Fault Tolerance.

Below, we list some best practices for using FT in your NEC infrastructure.

Enabling FT on a VM begins the creation of its secondary VM on a separate host. Because of the overhead this incurs, avoid frequently disabling and re-enabling FT on a VM.

Carefully consider which VMs should use FT, and enable it only on VMs that need it. Enabling FT disables certain other VM features and can decrease performance. When FT is enabled, the secondary VM that is created uses the same amount of system resources as the primary VM, which affects host and cluster capacity.

Networking best practices for FT:

o When possible, dedicate separate NICs for vMotion and FT logging. The communication between the primary VM and the secondary VM can require a lot of bandwidth. By separating FT logging traffic and vMotion traffic, you prevent either task from taking bandwidth from the other.

o Use at least a 1Gb NIC for FT logging.
o To help avoid FT logging network traffic bottlenecks, divide the FT-enabled VMs across several hosts, reducing the amount of FT logging traffic originating from each host.

o Avoid placing more than four FT-enabled VMs on each host in a cluster to help control the amount of FT logging traffic on a given host.
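The small, self-contained Python sketch below (not a VMware API) illustrates the two recommendations above by spreading FT-protected VMs across hosts round-robin while respecting a per-host cap; the VM and host names are placeholders.

```python
# Plain Python illustration (not a VMware API) of spreading FT-enabled VMs.
def place_ft_vms(ft_vms, hosts, max_per_host=4):
    if len(ft_vms) > len(hosts) * max_per_host:
        raise ValueError('Not enough hosts to respect the per-host FT limit')
    placement = {host: [] for host in hosts}
    for i, vm in enumerate(ft_vms):
        placement[hosts[i % len(hosts)]].append(vm)  # round-robin keeps counts even
    return placement

print(place_ft_vms(['ft-vm%d' % n for n in range(6)], ['host1', 'host2', 'host3']))
# Each host ends up with two FT-enabled VMs, well under the four-per-host guideline.
```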

Ensure that the primary and secondary VMs are on servers with matching processor models and speeds, so that the performance of a primary VM and its secondary VM match.

Set hosts involved in FT to the same power management settings in each server’s BIOS.

Set CPU reservations on the primary VM; the reservation will be applied to the secondary VM as well. This ensures that the secondary VM gets enough CPU cycles to stay synchronized with the primary VM.

VMware Distributed Resource Scheduler and VMware Distributed Power Management

VMware employs several "movement" technologies to shift workloads around in VMware clusters to adapt to ever-changing workload demand, maintenance, and power issues. These technologies are VMware Distributed Resource Scheduler (DRS), its sub-component Storage DRS, and VMware Distributed Power Management (DPM). VMware designed DRS and DPM to allow administrators to fully automate power and resource management across an entire data center. Below, we discuss some best practices regarding DRS and DPM.

VMware Distributed Resource Scheduler

VMware DRS is a vSphere feature that dynamically load-balances cluster resources. Using DRS can help increase the overall efficiency of your cluster by adjusting the placement of VMs as well as resource allocation within resource pools.

Figure 29: VMware DRS.

Below, we list some recommended best practices for VMware DRS.

Where possible, all NEC Express5800 or other hosts in your DRS-enabled clusters should have identical processors and memory. Identical hardware lets DRS more easily balance VM workloads across a cluster. In addition, DRS utilizes vMotion to migrate VMs to other hosts when balancing workloads, and vMotion requires the source and destination hosts to have compatible processors.

Always ensure that the settings for vMotion and vMotion network traffic are identical on all hosts within the DRS cluster, because vMotion is needed for several DRS functions. Without identical vMotion settings across the cluster, DRS will be less able to properly balance VM workloads.

Ensure users or applications shut down idle VMs. DRS performance can increase if idle VMs are powered off.

Set the DRS automation level of as many VMs as possible to automatic. VMs with DRS disabled cannot participate in DRS load-balancing tasks.

Create DRS affinity rules where applicable. These are helpful when you want a specific VM to run on a specific host; other DRS rule types are available to match your business needs (see the sketch below).
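As an example of scripting one DRS rule type, the hedged pyVmomi (Python) sketch below adds a VM-to-VM "keep together" affinity rule. It assumes a live connection, a vim.ClusterComputeResource object named cluster, and a list of vim.VirtualMachine objects; VM-to-host rules additionally require VM and host groups, which this sketch omits. The rule name is only an example.

```python
# A sketch only: assumes a live pyVmomi connection, a vim.ClusterComputeResource
# "cluster", and a list of vim.VirtualMachine objects "vms"; the name is an example.
from pyVmomi import vim

def add_keep_together_rule(cluster, vms, name='keep-app-tier-together'):
    rule = vim.cluster.AffinityRuleSpec(name=name, enabled=True, vm=vms)
    change = vim.cluster.RuleSpec(operation=vim.option.ArrayUpdateOperation.add, info=rule)
    spec = vim.cluster.ConfigSpecEx(rulesSpec=[change])
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
```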

For more information on DRS best practices, including sizing and performance tuning, see the DRS section in the VMware white paper Performance Best Practices for VMware vSphere 5.

VMware Storage DRS

VMware Storage Distributed Resource Scheduler is a new feature in vSphere 5. Storage DRS combines datastores into a new vSphere 5 object, the datastore cluster. Storage DRS monitors I/O load across the datastore cluster and shifts VM virtual disks between datastores to achieve I/O load balancing. In this way, Storage DRS attempts to eliminate storage performance bottlenecks as they occur.

Figure 30: VMware Storage DRS.

Below, we provide recommendations to help improve your storage performance while using Storage DRS.

When creating datastore clusters, do not configure them with more than 9,000 virtual disks or more than 32 datastores.

Avoid mixing datastores with different host interface protocols, RAID levels, or performance capabilities.

Configure your datastore cluster with as many datastores as possible to achieve the best I/O balance and performance with Storage DRS.

Monitor datastore I/O latency during peak hours. You should consider adding more datastores to a datastore cluster or reducing the workload on a datastore if the majority of the datastores in the datastore cluster are close to their I/O latency thresholds.

If you add datastores, ensure the newly added datastores access a different set of physical disks.

Monitor the underlying disk space of each LUN when using thin-provisioned LUNs in a datastore cluster. If too many thin-provisioned LUNs run out of disk space at the same time, Storage DRS may not be able to successfully balance the I/O of the datastore cluster. This can also trigger an excessive number of Storage vMotion tasks, which is likely to affect the performance of Storage DRS.
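To make the latency-monitoring recommendation above concrete, here is a small, self-contained Python sketch (not a VMware API) that flags a datastore cluster when most of its datastores sit near the Storage DRS I/O latency threshold; the 15 ms figure matches the vSphere 5 default, but the datastore names and latencies are invented examples.

```python
# Plain Python illustration (not a VMware API) of the monitoring idea above.
def needs_attention(latencies_ms, threshold_ms=15.0, near_fraction=0.9, majority=0.5):
    """True when more than `majority` of datastores exceed 90% of the threshold."""
    near = [ds for ds, lat in latencies_ms.items() if lat >= threshold_ms * near_fraction]
    return len(near) / len(latencies_ms) > majority

peak_latency_ms = {'ds01': 14.2, 'ds02': 13.9, 'ds03': 6.1, 'ds04': 14.8}
print(needs_attention(peak_latency_ms))  # True: three of four datastores are near the threshold
```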

VMware Distributed Power Management

VMware Distributed Power Management (DPM) is a feature used in vSphere clusters to help conserve overall power usage during periods of low cluster utilization. DPM works by migrating VMs onto fewer hosts when there is little activity across the cluster and placing the evacuated hosts into a powered-down standby mode.

Figure 31: VMware Distributed Power Management.

Below, we offer some best-practice recommendations for DPM.

Use host power management policies in conjunction with DPM to achieve the best power savings. Using both simultaneously in a clustered environment is always better than using only one.

Set DPM on each host to automatic, allowing DPM to adjust the cluster automatically based on its recommendations. When set to manual, vCenter only presents the most preferable DPM action to the administrator for approval, so you might see less power savings.

For more mission-critical VM workloads, protect your host and its VMs from powering off or migrating by disabling DPM on the host and setting VM/host affinity rules. This will keep those specific VMs on the desired host hardware.

Adjust the DPM Threshold based on your particular environment. DPM uses historical usage data to predict demand across the cluster at any given time. It then determines when hosts can be powered down and how many hosts need to continue running to cover periods of unexpected peaks in usage. The aggressiveness of DPM can be adjusted by changing the DPM Threshold, found in the Cluster settings menu. The default setting is 3, medium aggressiveness.

Monitor DPM usage on HA clusters. When DPM is enabled on an HA cluster, DPM always adjusts to the HA settings of the cluster to avoid breaking the HA threshold. This may mean that excess idle servers are left on when the cluster usage is low.

vCenter Update Manager

To simplify the update, patch, and upgrade process in a VMware environment, VMware developed vCenter Update Manager. Using vCenter Update Manager, you can manage and deploy updates, patches, and upgrades for ESX/ESXi hosts, virtual machine hardware, and VMware Tools installations. Below, we outline some guidelines for using vCenter Update Manager.

The vCenter Update Manager server and the vCenter Server can exist on the same machine. However, the two servers should be separated if the number of virtual machines exceeds 1,000 or the number of hosts exceeds 100.

The vCenter Update Manager database and the vCenter Server database can both exist on the same machine. However, the two databases should be separated if the number of virtual machines exceeds 300 or the number of hosts exceeds 30.

Always use separate physical disks for the Update Manager database and the Update Manager patch store.

Ensure that your Update Manager server has 2 GB or more of RAM, so that Update Manager can cache frequently used patch files in memory.

Before attempting to patch VMware Tools using Update Manager, power on all target virtual machines. This decreases the overall latency of the task by avoiding the need for Update Manager to power on each VM before upgrading.

When upgrading the virtual machine hardware versions, power off all target VMs. This will decrease the overall latency by avoiding the need for Update Manager to power off each VM before upgrading. Keep in mind, though, that VMware Tools may need to be updated in order to upgrade the virtual machine hardware version.

For more information on the Update Manager guidelines above, see the VMware white paper Performance Best Practices for VMware vSphere 5. For help when sizing your VMware vCenter Update Manager patch store and database, VMware has created a sizing tool that can be downloaded from http://pubs.vmware.com/vsphere-50/topic/com.vmware.ICbase/PDF/vsphere-update-manager-50-sizing-estimator.xls.

Summary

In this guide, we have outlined many of the best practices for the NEC Express5800/A1080a-E running VMware vSphere 5. Following these practices will help your company reap the enormous potential benefits of this solution.