VCP FAQ


Virtual Machine Maximums

Table 1 contains configuration maximums related to virtual machines.

SCSI controllers per virtual machine: 4
Devices per SCSI controller: 15
Devices per virtual machine (Windows): 60
Devices per virtual machine (Linux): 60
Size of SCSI disk: 2TB
Number of virtual CPUs per virtual machine: 4
Size of RAM per virtual machine: 16384MB
Number of NICs per virtual machine: 4
Number of IDE devices per virtual machine: 4
Number of floppy devices per virtual machine: 2
Number of parallel ports per virtual machine: 2
Number of serial ports per virtual machine: 2
Size of a virtual machine swap file: 16384MB
Number of virtual PCI devices (NICs, SCSI controllers, audio devices (VMware Server only), and video cards; exactly one video card is present in every virtual machine): 6
Number of remote consoles to a virtual machine: 10

Storage Maximums

Table 2 contains configuration maximums related to ESX Server host storage.

Block size (MB): 8
Raw Device Mapping size (TB): 2
Simultaneous power-ons of virtual machines on different hosts against a single VMFS volume (measured in number of hosts): 32
Number of hosts per virtual cluster: 32
Number of volumes configured per server: 256
Number of extents per volume: 32

VMFS-2

Volume size: 2TB x number of extents (see note 1 below)

File size (block size = 1MB): 456GB
File size (block size = 8MB): 2TB
File size (block size = 64MB): 27TB
File size (block size = 256MB): 64TB
Number of files per volume: 256 + (64 x number of extents)

VMFS-3

Volume size (block size = 1MB): ~16TB - 4GB (see note 2 below)
Volume size (block size = 2MB): ~32TB - 8GB
Volume size (block size = 4MB): ~64TB - 16GB
Volume size (block size = 8MB): 64TB
File size (block size = 1MB): 256GB
File size (block size = 8MB): 2TB
Number of files per directory: unlimited
Number of directories per volume: unlimited
Number of files per volume: unlimited
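The VMFS-3 file-size figures follow a simple pattern: maximum file size is the block size multiplied by roughly 256K file blocks. A quick shell check (the 256K multiplier is an inference from the 1MB and 8MB figures above, not an official formula, and the 2MB and 4MB file sizes are extrapolations):

  # inferred: max VMFS-3 file size = block size x 262144
  for bs_mb in 1 2 4 8; do
    echo "block size ${bs_mb}MB -> max file size $((bs_mb * 256))GB"
  done
  # prints 256GB for 1MB blocks and 2048GB (2TB) for 8MB blocks, matching the table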


Fibre Channel

LUNs per server: 256
SCSI controllers per server: 16
Devices per SCSI controller: 16
Number of paths to a LUN: 32
LUNs concurrently opened by all virtual machines: 256
Maximum LUN ID: 255

Storage Maximums (Continued)

NFS

LUNs per server: 256
SCSI controllers per server: 2
LUNs concurrently opened by all virtual machines: 256

Hardware and software iSCSI

LUNs per server: 256
SCSI controllers per server: 2

Note 1: Minimum = 100MB.
Note 2: ~ denotes an approximate value.

Compute Maximums

This table contains configuration maximums related to ESX Server host compute resources.

Number of virtual CPUs per server: 128
Number of cores per server: 32
Number of (hyperthreaded) logical processors per server: 32
Number of virtual CPUs per core: 8

Memory Maximums

This table contains configuration maximums related to ESX Server host memory.

Size of RAM per server: 64GB
RAM allocated to service console: 800MB

Networking Maximums

This table contains configuration maximums related to ESX Server host networking.

Physical NICs
Number of e100 NICs: 26
Number of e1000 NICs: 32
Number of Broadcom NICs: 20

Advanced, physical traits
Number of port groups: 512
Number of NICs in a team: 32
Number of Ethernet ports: 32


Virtual NICs/switches
Number of virtual NICs per virtual switch: 1016
Number of virtual switches: 127

VirtualCenter Maximums

This table contains configuration maximums related to VirtualCenter.

Number of virtual machines (for management server scalability): 1500
Number of hosts per DRS cluster: 32
Number of hosts per HA cluster: 16
Number of hosts per VirtualCenter server: 100

VMware Infrastructure Introduction

VMware Infrastructure is a full infrastructure virtualization suite that provides comprehensive virtualization, management, resource optimization, application availability, and operational automation capabilities in an integrated offering. VMware Infrastructure virtualizes and aggregates the underlying physical hardware resources across multiple systems and provides pools of virtual resources to the datacenter in the virtual environment.

In addition, VMware Infrastructure provides a set of distributed services that enable fine-grained, policy-driven resource allocation, high availability, and consolidated backup of the entire virtual datacenter. These distributed services enable an IT organization to establish and meet production Service Level Agreements with their customers in a cost-effective manner.

VMware Infrastructure includes the following components shown in Figure 1‐1:

VMware ESX Server. A robust, production-proven virtualization layer that runs on physical servers and abstracts processor, memory, storage, and networking resources into multiple virtual machines.


VirtualCenter Management Server (VirtualCenter Server). The central point for configuring, provisioning, and managing virtualized IT environments.

Virtual Infrastructure Client (VI Client). An interface that allows users to connect remotely to the VirtualCenter Server or individual ESX Servers from any Windows PC.

Virtual Infrastructure Web Access (VI Web Access). A Web interface that allows virtual machine management and access to remote consoles.

VMware Virtual Machine File System (VMFS). A high-performance cluster file system for ESX Server virtual machines.

VMware Virtual Symmetric Multi-Processing (SMP). Feature that enables a single virtual machine to use multiple physical processors simultaneously.

VMware VMotion. Feature that enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, and complete transaction integrity.

VMware HA. Feature that provides easy-to-use, cost-effective high availability for applications running in virtual machines. In the event of server failure, affected virtual machines are automatically restarted on other production servers that have spare capacity.

VMware Distributed Resource Scheduler (DRS). Feature that allocates and balances computing capacity dynamically across collections of hardware resources for virtual machines.

VMware Consolidated Backup (Consolidated Backup). Feature that provides an easy-to-use, centralized facility for agent-free backup of virtual machines. It simplifies backup administration and reduces the load on ESX Servers.

VMware Infrastructure SDK. Feature that provides a standard interface for VMware and third-party solutions to access VMware Infrastructure.

Cluster

A number of similarly configured x86 servers can be grouped together with connections to the same network and storage subsystems to provide an aggregate set of resources in the virtual environment, called a cluster.

Storage Networks and Arrays

Fibre Channel SAN arrays, iSCSI SAN arrays, and NAS arrays are widely used storage technologies supported by VMware Infrastructure to meet different datacenter storage needs. Sharing the storage arrays between groups of servers via storage area networks allows aggregation of the storage resources and provides more flexibility in provisioning them to virtual machines.

Management Server

The VirtualCenter Management Server provides a convenient single point of control for the datacenter. It runs on top of Windows 2003 Server to provide many necessary datacenter services such as access control, performance monitoring, and configuration. It unifies the resources from the individual computing servers to be shared among virtual machines in the entire datacenter. It accomplishes this by managing the assignment of virtual machines to the computing servers and the assignment of resources to the virtual machines within a given computing server, based on the policies set by the system administrator.

Virtual Datacenter Architecture


VMware Infrastructure virtualizes the entire IT infrastructure including servers, storage, and networks. It aggregates these heterogeneous resources and presents a simple and uniform set of elements in the virtual environment. With VMware Infrastructure, IT resources can be managed like a shared utility and dynamically provisioned to different business units and projects without worrying about the underlying hardware differences and limitations.

Resources are provisioned to virtual machines based on the policies set by the system administrator who owns the resources. The policies can reserve a set of resources for a particular virtual machine to guarantee its performance. The policies can also prioritize and allocate a variable portion of the total resources to each virtual machine. A virtual machine is prevented from powering on (and consuming resources) if doing so would violate the resource allocation policies.

Hosts, Clusters, and Resource Pools

Hosts, clusters, and resource pools provide flexible and dynamic ways to organize the aggregated computing and memory resources in the virtual environment and link them back to the underlying physical resources.

A cluster acts, and can be managed, much like a host. It represents the aggregate computing and memory resources of a group of physical x86 servers sharing the same network and storage arrays. For example, if the group contains eight servers, each with four dual-core CPUs running at 4 gigahertz per core and 32 gigabytes of memory, the cluster has 256 gigahertz of computing power and 256 gigabytes of memory available for the virtual machines assigned to it.
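That arithmetic can be verified with a quick shell calculation (a sketch using the example numbers above):

  hosts=8 cpus_per_host=4 cores_per_cpu=2 ghz_per_core=4 ram_gb_per_host=32
  echo "cluster CPU: $((hosts * cpus_per_host * cores_per_cpu * ghz_per_core)) GHz"  # 256 GHz
  echo "cluster RAM: $((hosts * ram_gb_per_host)) GB"                                # 256 GB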

Resource pools are partitions of computing and memory resources from a single host or a cluster. Any resource pool can be partitioned into smaller resource pools to further divide and assign resources to different groups or for different purposes. In other words, resource pools can be hierarchical and nested.

VMware VMotion

VMware VMotion, DRS, and HA are distributed services that enable efficient and automated resource management and high virtual machine availability.

Virtual machines run on and consume resources from ESX Server. VMotion enables the migration of running virtual machines from one physical server to another without service interruption. This allows virtual machines to move from a heavily loaded server to a lightly loaded one. The effect is a more efficient assignment of resources. With VMotion, resources can be dynamically reallocated to virtual machines across physical servers.

VMware DRS

VMware DRS provides resource control and management capability in the virtual datacenter. A cluster can be viewed as an aggregation of the computing and memory resources of the underlying physical hosts put together in a single pool. Virtual machines can be assigned to that pool. DRS monitors the workload of the running virtual machines and the resource utilization of the hosts to assign resources.

Using VMotion and an intelligent resource scheduler, VMware DRS automates the task of assigning virtual machines to servers within the cluster to use the computing and memory resources of those servers. DRS does the calculation and automates the pairing. If a new physical server is made available, DRS automatically redistributes the virtual machines using VMotion to balance the workloads. If a physical server must be taken down for any reason, DRS automatically reassigns its virtual machines to other servers.

VMware HA

VMware HA offers a simple and low-cost high-availability alternative to application clustering. If the hosting server fails, HA automatically and quickly restarts its virtual machines on a different physical server within the cluster. All applications within the virtual machines enjoy the high-availability benefit, not just one (as with application clustering).

HA monitors all physical hosts in a cluster and detects host failures. An agent placed on each physical host maintains a heartbeat with the other hosts in the resource pool, and loss of a heartbeat initiates the process of restarting all affected virtual machines on other hosts. HA ensures that sufficient resources are available in the cluster at all times to restart virtual machines on different physical hosts in the event of host failure.
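The heartbeat logic itself is internal to the HA agents, but the basic pattern can be illustrated with a trivial service-console loop (purely illustrative, not VMware's implementation; the host names are hypothetical):

  # illustrative only: flag a host whose heartbeat replies stop arriving
  for host in esx01 esx02 esx03; do   # hypothetical host names
    if ! ping -c 3 -W 1 "$host" > /dev/null 2>&1; then
      echo "$host: heartbeat lost - its virtual machines would be restarted elsewhere"
    fi
  done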

Network Architecture

A virtual switch works like a layer 2 physical switch. Each server has its own virtual switches. On one side of the virtual switch are port groups that connect to virtual machines. On the other side are uplink connections to physical Ethernet adapters on the server where the virtual switch resides. Virtual machines connect to the outside world through the physical Ethernet adapters that are connected to the virtual switch uplinks.

A virtual switch can connect its uplinks to more than one physical Ethernet adapter to enable NIC teaming. With NIC teaming, two or more physical adapters can be used to share the traffic load or provide passive failover in the event of a physical adapter hardware failure or a network outage.

A port group is a unique concept in the virtual environment. A port group is a mechanism for setting policies that govern the network connected to it. A vSwitch can have multiple port groups. Instead of connecting to a particular port on the vSwitch, a virtual machine connects its vNIC to a port group. All virtual machines that connect to the same port group belong to the same network inside the virtual environment, even if they are on different physical servers.

Port groups can be configured to enforce a number of policies that provide enhanced networking security, network segmentation, better performance, higher availability, and traffic management (a command sketch follows this list):

Layer 2 security options. Enforces what the vNICs in a virtual machine can do by controlling promiscuous mode, MAC address changes, and forged transmits.

VLAN support. Allows virtual networks to join physical VLANs or support QoS policies.

Traffic shaping. Defines average bandwidth, peak bandwidth, and burst size. These policies can be set to improve traffic management.

NIC teaming. Sets the NIC teaming policies for an individual port group or network to share traffic load or provide failover in case of hardware failure.
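On an ESX Server 3.x host, a virtual switch and port group of the kind described above can be created from the service console with esxcfg-vswitch. A sketch (the switch, uplink, port group, and VLAN values are placeholders; verify the exact flags on your release):

  esxcfg-vswitch -a vSwitch1                          # create a virtual switch
  esxcfg-vswitch -L vmnic2 vSwitch1                   # link a physical uplink adapter
  esxcfg-vswitch -A "Production LAN" vSwitch1         # add a port group
  esxcfg-vswitch -v 105 -p "Production LAN" vSwitch1  # tag the port group with VLAN 105
  esxcfg-vswitch -l                                   # list switches and port groups to verify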

Storage Architecture

The VMware Infrastructure storage architecture consists of layers of abstraction that hide and manage the complexity and differences among physical storage subsystems.

To the applications and guest operating systems inside each virtual machine, the storage subsystem appears as a simple virtual BusLogic or LSI Logic SCSI host bus adapter connected to one or more virtual SCSI disks. The virtual SCSI disks are provisioned from datastore elements in the datacenter. A datastore is like a storage appliance that serves up storage space for many virtual machines across multiple physical hosts.


The datastore provides a simple model for allocating storage space to the individual virtual machines without exposing them to the complexity of the variety of physical storage technologies available, such as Fibre Channel SAN, iSCSI SAN, direct attached storage, and NAS.

A virtual machine is stored as a set of files in a directory in the datastore. A virtual disk inside each virtual machine is one or more files in the directory. As a result, you can operate on a virtual disk (copy, move, back up, and so on) just like a file. New virtual disks can be hot-added to a virtual machine without powering it down. In that case, a virtual disk file (.vmdk) is created in VMFS to provide new storage for the hot-added virtual disk, or an existing virtual disk file is associated with the virtual machine.

VMFS is a clustered file system that leverages shared storage to allow multiple physical hosts to read and write to the same storage simultaneously. VMFS provides on-disk locking to ensure that the same virtual machine is not powered on by multiple servers at the same time. If a physical host fails, the on-disk lock for each of its virtual machines is released so that the virtual machines can be restarted on other physical hosts.
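From the service console, VMFS-3 volumes can be created and inspected with vmkfstools. A sketch (the vmhba partition path and volume label are placeholders for your own LUN):

  # create a VMFS-3 volume with a 1MB block size on partition 1 of a LUN
  vmkfstools -C vmfs3 -b 1m -S myVMFS vmhba1:0:0:1
  # query capacity and free space on an existing volume
  vmkfstools -P /vmfs/volumes/myVMFS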

VMFS also features enterprise-class crash consistency and recovery mechanisms, such as distributed journaling, a crash-consistent virtual machine I/O path, and machine state snapshots. These mechanisms aid quick root-cause analysis and recovery from virtual machine, physical host, and storage subsystem failures.

VMFS also supports raw device mapping (RDM). RDM provides a mechanism for a virtual machine to have direct access to a LUN on the physical storage subsystem (Fibre Channel or iSCSI only). RDM is useful for supporting two typical types of applications:

SAN snapshot or other layered applications that run in the virtual machines. RDM better enables scalable backup offloading systems using features inherent to the SAN.

Any use of Microsoft Clustering Services (MSCS) that spans physical hosts: virtual-to-virtual clusters as well as physical-to-virtual clusters. Cluster data and quorum disks should be configured as RDMs rather than as files on a shared VMFS.

VMware Consolidated Backup

VMware Infrastructure's storage architecture enables a simple virtual machine backup solution: VMware Consolidated Backup. Consolidated Backup provides a centralized facility for LAN-free backup of virtual machines.

Consolidated Backup works in conjunction with a third-party backup agent residing on a separate backup proxy server (not on the server running ESX Server) but does not require an agent inside the virtual machines.

The third-party backup agent manages the backup schedule. It starts Consolidated Backup when it is time to do a backup. When started, Consolidated Backup runs a set of pre-backup scripts to quiesce the virtual disks and take their snapshots. It then runs a set of post-thaw scripts to restore the virtual machine to normal operation. At the same time, it mounts the disk snapshot on the backup proxy server. Finally, the third-party backup agent backs up the files on the mounted snapshot to its backup targets. By taking snapshots of the virtual disks and backing them up through a separate backup proxy server, Consolidated Backup provides a simple, less intrusive, and low-overhead backup solution for the virtual environment.
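That sequence can be outlined as a stub script (a sketch only; every function name here is hypothetical, and the real orchestration is driven by the third-party agent and the VCB framework):

  # illustrative outline of a Consolidated Backup job (all steps are stubs)
  pre_backup()     { echo "quiesce the guest and snapshot its virtual disks"; }
  post_thaw()      { echo "return the virtual machine to normal operation"; }
  mount_snapshot() { echo "mount the disk snapshot on the backup proxy"; }
  agent_backup()   { echo "third-party agent copies mounted files to backup targets"; }
  pre_backup && post_thaw && mount_snapshot && agent_backup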

VirtualCenter Management Server

The VirtualCenter Management Server components are user access control, core services, distributed services, and various interfaces.


The User Access Control allows the system administrator to create and manage different levels of access to the VirtualCenter for different users.

For example, there might be a user class that manages configuring the physical servers in the datacenter and there might be a different user class that manages only virtual resources within a particular resource pool.

Core Services are basic management services for a virtual datacenter. They include services such as:

VM Provisioning. Guides and automates the provisioning of virtual machines.

Host and VM Configuration. Allows the configuration of hosts and virtual machines.

Resources and Virtual Machine Inventory Management. Organizes virtual machines and resources in the virtual environment and facilitates their management.

Statistics and Logging. Logs and reports on the performance and resource utilization statistics of datacenter elements, such as virtual machines, hosts, and clusters.

Alarms and Event Management. Tracks and warns users about potential resource over-utilization or event conditions.

Task Scheduler. Schedules actions, such as VMotion, to happen at a given time.

Distributed Services are solutions that extend VMware Infrastructure's capabilities to the next level, such as VMware DRS, VMware HA, and VMware VMotion. Distributed Services allow the configuration and management of these solutions centrally from the VirtualCenter Management Server.

VirtualCenter Server has four key interfaces:

ESX Server management. Interfaces with the VirtualCenter agent to manage each physical server in the datacenter.

VMware Infrastructure API. Interfaces with VMware management clients and third-party solutions.

Database interface. Connects to Oracle or Microsoft SQL Server to store information such as virtual machine configurations, host configurations, resources and virtual machine inventory, performance statistics, events, alarms, user permissions, and roles.

Active Directory interface. Connects to Active Directory to obtain user access control information.

Communication Between Virtual Center and ESX Server

VirtualCenter communicates with the ESX Server host agent through the VMware Infrastructure API (VI API). When a host is first added to VirtualCenter, VirtualCenter sends a VirtualCenter agent to run on the host. That agent communicates with the host agent.

The VirtualCenter agent acts as a mini-VirtualCenter Server to perform the following functions:


Relays and enforces resource allocation decisions made in VirtualCenter, including those sent by the DRS engine

Passes virtual machine provisioning and configuration change commands to the host agent

Passes host configuration change commands to the host agent

Collects performance statistics, alarms, and error conditions from the host agent and sends them to the VirtualCenter Management Server

Accessing the Virtual Datacenter

Users can manage the VMware Infrastructure datacenter or access the virtual machine console through three different means: the VI Client, Web Access through a Web browser, or terminal services (such as Windows Terminal Services or Xterm). Accessing hosts directly should be done only by physical host administrators in special circumstances. All relevant functionality that can be performed on the host can also be performed in VirtualCenter Server.

The VI Client accesses Virtual Center through the VMware API. After the user is authenticated, a session starts in Virtual Center, and the user sees the resources and virtual machines that are assigned to the user. For virtual machine console access, the VI Client first gets the virtual machine location from Virtual Center through the VMware API. It then connects to the appropriate host and provides access to the virtual machine console.

Users can also access Virtual Center Management Server through the Web browser by first pointing the browser to an Apache Tomcat Server set up by Virtual Center Management Server. The Apache Tomcat Server mediates the communication between the browser and Virtual Center through the VMware API.

To access the virtual machine consoles through the Web browser, users can make use of the bookmark that is created by VirtualCenter Server. The bookmark first points to the VI Web Access.

VI Web Access resolves the physical location of the virtual machine and redirects the Web browser to the ESX Server where the virtual machine resides.

If the virtual machine is running and the user knows the IP address of the virtual machine, the user can also access the virtual machine console using standard tools, such as Windows Terminal Services or Xterm.

Conclusion

VMware Infrastructure provides a simple architecture in the virtual environment to allow companies to manage computing, storage, and networking resources without worrying about the underlying physical hardware. VI architecture allows enterprises to create and configure their datacenters and reallocate resources to different priorities without the time delay and cost of reconfiguring their physical hardware infrastructure.

With a suite of complementary virtualization and management services, such as VMware VMotion, VMware DRS, VMware HA, and VMware Consolidated Backup, VMware Infrastructure is the only product that provides a complete solution rather than a piecemeal approach to building datacenters in the virtual environment.


Hardware Requirements

VirtualCenter Server hardware must meet the following requirements:

Processor: 2.0GHz or higher Intel or AMD x86 processor. Processor requirements can be higher if your database runs on the same hardware.

Memory: 2GB RAM minimum. RAM requirements can be higher if your database runs on the same hardware.

Disk storage: 560MB minimum, 2GB recommended. You must have 245MB free on the destination drive for installation of the program, and you must have 315MB free on the drive containing your %temp% directory.

MSDE disk requirements: The demonstration database requires up to 2GB free disk space to decompress the installation archive. However, approximately 1.5GB of these files are deleted after the installation is complete.

Networking: 10/100 Ethernet adapter minimum (Gigabit recommended).

Scalability: A VirtualCenter Server configured with the hardware minimums can support 20 concurrent clients, 50 ESX Server hosts, and over 1000 virtual machines. A dual-processor VirtualCenter Server with 3GB RAM can scale to 50 concurrent client connections, 100 ESX Server hosts, and over 2000 virtual machines.

VirtualCenter Server Software Requirements

The VirtualCenter Server is supported as a service on the 32-bit versions of these operating systems:

Windows 2000 Server SP4 with Update Rollup 1 (Update Rollup 1 can be downloaded from http://www.microsoft.com/windows2000/server/evaluation/news/bulletins/rollup.mspx)
Windows XP Pro (at any SP level)
Windows 2003 (all releases except 64-bit)

VirtualCenter 2.0 installation is not supported on 64-bit operating systems. The VirtualCenter installer requires Internet Explorer 5.5 or higher in order to run.

VirtualCenter Database Requirements

VirtualCenter supports the following database formats:

Microsoft SQL Server 2000 (SP4 only)
Oracle 9iR2, 10gR1 (versions 10.1.0.3 and higher only), and 10gR2
Microsoft MSDE (not supported for production environments)

Each database requires some configuration adjustments in addition to the basic installation.

Virtual Infrastructure Client Requirements

Virtual Infrastructure Client Hardware Requirements

The Virtual Infrastructure Client hardware must meet the following requirements:

Processor: 266MHz or higher Intel or AMD x86 processor (500MHz recommended).

Memory: 256MB RAM minimum, 512MB recommended.

Disk Storage: 150MB free disk space required for basic installation. You must have 55MB free on the destination drive for installation of the program, and 100MB free on the drive containing your %temp% directory.

Networking: 10/100 Ethernet adapter (Gigabit recommended).

Virtual Infrastructure Client Software Requirements

The Virtual Infrastructure Client is designed for the 32-bit versions of these operating systems:

Windows 2000 Pro SP4
Windows 2000 Server SP4
Windows XP Pro (at any SP level)
Windows 2003 (all releases except 64-bit)

The Virtual Infrastructure Client requires the .NET Framework 1.1 (included in the installation if required).

VirtualCenter VI Web Access Requirements

The VI Web Access client is designed for these browsers:

Windows: Internet Explorer 6.0 or higher, Netscape Navigator 7.0, Mozilla 1.x, Firefox 1.0.7 and higher.

Linux: Netscape Navigator 7.0 or later, Mozilla 1.x, Firefox 1.0.7 and higher.

License Server Requirements

This section describes the license server requirements.

License Server Hardware Requirements

The license server hardware must meet the following requirements:

Processor: 266MHz or higher Intel or AMD x86 processor.

Memory: 256MB RAM minimum, 512MB recommended.

Disk Storage: 25MB free disk space required for basic installation.

Networking: 10/100 Ethernet adapter (Gigabit recommended).

VMware recommends that you install the license server on the same machine as your VirtualCenter Server to ensure connectivity.

License Server Software Requirements

The license server software is supported on the 32-bit versions of the following operating systems:

Windows 2000 Server SP4
Windows XP Pro (at any SP level)
Windows 2003 (all releases except 64-bit)

ESX Server Requirements

This section discusses the minimum and maximum hardware configurations supported by ESX Server version 3.

Minimum Server Hardware Requirements

You need the following hardware and system resources to install and use ESX Server.

At least two processors:

1500MHz Intel Xeon and later, or AMD Opteron (32-bit mode) for ESX Server
1500MHz Intel Xeon and later, or AMD Opteron (32-bit mode) for Virtual SMP
1500MHz Intel Viiv or AMD A64 x2 dual-core processors

1GB RAM minimum.

One or more Ethernet controllers. Supported controllers include Broadcom NetXtreme 570x Gigabit controllers and Intel PRO/100 adapters. For best performance and security, use separate Ethernet controllers for the service console and the virtual machines.

A SCSI adapter, Fibre Channel adapter, or internal RAID controller. Basic SCSI controllers are Adaptec Ultra-160 and Ultra-320, LSI Logic Fusion-MPT, and most NCR/Symbios SCSI controllers.

RAID adapters supported are HP Smart Array, Dell PercRAID (Adaptec RAID and LSI MegaRAID), and IBM (Adaptec) ServeRAID controllers.

Fibre Channel adapters supported are Emulex and QLogic host bus adapters (HBAs).

A SCSI disk, Fibre Channel LUN, or RAID LUN with unpartitioned space. In a minimum configuration, this disk or RAID is shared between the service console and the virtual machines.

For iSCSI, a disk attached to an iSCSI controller, such as the QLogic qla4010.

ESX Server supports installing and booting from the following storage systems:

IDE/ATA disk drives. Installing ESX Server on an IDE/ATA drive or IDE/ATA RAID is supported. However, you should ensure that your specific drive controller is included in the supported hardware. Storage of virtual machines is currently not supported on IDE/ATA drives or RAIDs. Virtual machines must be stored on VMFS partitions configured on a SCSI drive, a SCSI RAID, or a SAN.

SCSI disk drives. SCSI disk drives are supported for installing ESX Server. They can also store virtual machines on VMFS partitions.

Storage area networks (SANs). SANs are supported for installing ESX Server. They can also store virtual machines on VMFS partitions. For information about pre-installation and configuration tasks and known issues with installing and booting from SANs, see the VMware SAN configuration documentation.

Enhanced Performance Recommendations

The lists in previous sections suggest a basic ESX Server configuration. In practice, you can use multiple physical disks, which can be SCSI disks, Fibre Channel LUNs, or RAID LUNs.

Here are some recommendations for enhanced performance:

RAM. Having sufficient RAM for all your virtual machines is important to achieving good performance. ESX Server hosts require more RAM than typical servers. An ESX Server host must be equipped with sufficient RAM to run concurrent virtual machines, plus run the service console.

For example, operating four virtual machines with Red Hat Enterprise Linux or Windows XP requires your ESX Server host to be equipped with over a gigabyte of RAM for baseline performance:

1024MB for the virtual machines (256MB minimum per operating system, as recommended by vendors, x 4)
272MB for the ESX Server service console

Running these example virtual machines with a more reasonable 512MB of RAM requires the ESX Server host to be equipped with at least 2.2GB of RAM:

2048MB for the virtual machines (512MB x 4)
272MB for the ESX Server service console

These calculations do not take into account variable overhead memory for each virtual machine.

Dedicated fast Ethernet adapters for virtual machines. Dedicated Gigabit Ethernet cards for virtual machines, such as Intel PRO/1000 adapters, improve throughput to virtual machines with high network traffic.
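Returning to the RAM example, the sizing arithmetic as a small script (a sketch; per-VM overhead memory is deliberately ignored, as noted above):

  vms=4 console_mb=272
  for per_vm_mb in 256 512; do
    echo "${vms} VMs x ${per_vm_mb}MB + ${console_mb}MB console = $((vms * per_vm_mb + console_mb))MB"
  done
  # prints 1296MB (over a gigabyte) and 2320MB (at least 2.2GB)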

Disk location. For best performance, all data used by your virtual machines should be on physical disks allocated to virtual machines. These physical disks should be large enough to hold the disk images used by all the virtual machines.

VMFS3 partitioning. For best performance, use VI Client or VI Web Access to set up your VMFS3 partitions rather than the ESX Server installer. Using VI Client or VI Web Access ensures that the starting sectors of partitions are 64K-aligned, which improves storage performance.

Processors. Faster processors improve ESX Server performance. For certain workloads, larger caches improve ESX Server performance.

Hardware compatibility. To ensure the best possible I/O performance and workload management, VMware ESX Server provides its own drivers for supported devices. Be sure that the devices you plan to use in your server are supported. For additional detail on I/O device compatibility, download the ESX Server I/O Compatibility Guide from the VMware Web site.

Maximum Configuration for ESX Server

This section describes the hardware maximums for an ESX Server host machine. (Do not confuse this with a list of virtual hardware supported by a virtual machine.)

Storage

16 host bus adapters (HBAs) per ESX Server system, with 15 targets per HBA
128 logical unit numbers (LUNs) per storage array
255 LUNs per ESX Server system
32 paths to a LUN
Maximum LUN ID: 255

NOTE Although ESX Server supports up to 256 Fibre Channel LUNs for operation, the installer supports a maximum of 128 Fibre Channel SAN LUNs. If you have more than 128 LUNs, connect them after the installation is complete.

Virtual Machine File System (VMFS)

128 VMFS volumes per ESX Server system
Maximum physical extents per VMFS volume:
VMFS-3 volumes: 32 physical extents
VMFS-2 volumes: 32 physical extents (VMFS-2 volumes are read-only for ESX Server 3.0)
2TB per physical extent
Maximum size per VMFS volume:
VMFS-3 volumes: approximately 64TB, with a maximum of 2TB per physical extent
VMFS-2 volumes: approximately 64TB, with a maximum of 2TB per physical extent (VMFS-2 volumes are read-only for ESX Server 3.0)

Processor maximums (sockets / cores / threads):

Single core, with hyperthreading: 16 sockets, 16 cores, 32 threads
Single core, without hyperthreading: 16 sockets, 16 cores, 16 threads
Dual core, with hyperthreading: 8 sockets, 16 cores, 32 threads
Dual core, without hyperthreading: 16 sockets, 32 cores, 32 threads


Virtual Processors

A total of 128 virtual processors in all virtual machines per ESX Server host.

Memory

64GB of RAM per ESX Server system.

Adapters

Up to 64 adapters of all types, including storage and network adapters, per system
Up to 20 Gigabit Ethernet or 10/100 Ethernet ports per system
Up to 1024 ports per virtual switch

Virtual Machine Specifications

Each ESX Server machine can host up to 128 virtual CPUs in virtual machines (and up to 200 registered virtual machines), with the following capabilities and specifications.

Virtual Storage

Up to four host bus adapters per virtual machine
Up to 15 targets per host bus adapter
Up to 60 targets per virtual machine; 256 targets concurrently in all virtual machines per ESX Server host

Virtual SCSI Devices

Up to four virtual SCSI adapters per virtual machine, with up to 15 devices per adapter
9TB per virtual disk

Virtual Processor: Intel Pentium II or later (dependent on system processor); one, two, or four processors per virtual machine

NOTE All multiprocessor virtual machines require purchased licensing for VMware Virtual SMP for ESX Server. If you plan to create a two-processor virtual machine, your ESX Server machine must have at least two physical processors. For a four-processor virtual machine, your ESX Server machine must have at least four physical processors.

Virtual Chip Set: Intel 440BX-based motherboard with NS338 SIO chip

Virtual BIOS: Phoenix BIOS 4.0 Release 6

Virtual Machine Memory: Up to 16GB per virtual machine

NOTE Windows NT as a guest supports only 3.444GB RAM.

Virtual Adapters: Up to six virtual PCI slots per virtual machine

Virtual Ethernet Cards: Up to four virtual Ethernet adapters per virtual machine

NOTE Each virtual machine has a total of six virtual PCI slots, one of which is used by the graphics adapter. The total number of virtual adapters, SCSI plus Ethernet, cannot be greater than six.

Virtual Floppy Drives: Up to two 1.44MB floppy drives per virtual machine

Virtual CD: Up to four drives per virtual machine

Legacy Devices: Virtual machines can also make use of the following legacy devices. However, for performance reasons, use of these devices is not recommended.


Virtual Serial (COM) Ports: Up to four serial ports per virtual machine

Virtual Parallel (LPT) Ports: Up to three virtual LPT ports per virtual machine

Host-Based License and Server-Based License Modes

VirtualCenter and ESX Server support two modes of licensing: license server-based and host-based. In host-based licensing mode, the license files are stored on individual ESX Server hosts. In license server-based licensing mode, licenses are stored on a license server, which makes these licenses available to one or more hosts. You can run a mixed environment employing both host-based and license server-based licensing.

VirtualCenter, and features that require VirtualCenter such as VMotion, must be licensed in license server-based mode. ESX Server-specific features can be licensed in either license server-based or host-based mode.

License Server-Based Licensing

License server-based licensing simplifies license management in large, dynamic environments by allowing a VMware license server to administer licenses. With license server-based licensing, you maintain all your VirtualCenter Management Server and ESX Server licenses from one console.

Server-based licensing is based on industry-standard FlexNet mechanisms. With server-based licensing, a license server manages a license pool, which is a central repository holding your entire licensed entitlement. When a host requires a particular licensed functionality, the license for that entitlement is checked out from the license pool. License keys are released back to the pool when they are no longer being used and are available again to any host.

The advantages of license server-based licensing include:

You administer all licensing from a single location. New licenses are allocated and reallocated using any combination of ESX Server form factors. For example, you can use the same 32-processor license for sixteen 2-processor hosts, eight 4-processor hosts, four 8-processor hosts, two 16-processor hosts, or any combination totaling 32 processors.
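The form-factor flexibility in that 32-processor example is simple division, as a quick sketch:

  pool=32
  for per_host in 2 4 8 16; do
    echo "$((pool / per_host)) hosts with ${per_host} processors each from a ${pool}-processor license"
  done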

Ongoing license management is simplified by allowing licenses to be assigned and reassigned as needed. Assignment changes as the needs of an environment change, such as when hosts are added or removed, or premium features like VMotion, DRS, or HA are transferred among hosts.

During periods of license server unavailability, VirtualCenter Servers and ESX Server hosts using license server-based licenses are unaffected for a 14-day grace period, relying on cached licensing configurations, even across reboots.

VMware recommends using the license server-based licensing mode for large, changing environments.

Host-Based Licensing

The host-based licensing mode is similar to the licensing mode of previous releases. With host-based licensing, your total entitlement for purchased features is divided on a per-machine basis, among separate license files residing on ESX Server hosts and the VirtualCenter Server. When someone activates a licensed feature, the license for that entitlement must reside in the license file on that host. With host-based licensing, you maintain separate license files on each ESX Server host. Distribution of unused licenses is not automatic, and there is no dependence on an external connection for licensing. Host-based license files are placed directly on individual ESX Server hosts and replace the serial numbers used by ESX Server version 2.x.

The advantages of host-based licensing include:

Host-based files require no license server to be installed for ESX Server host-only environments.


In a VirtualCenter and license server environment, host-based licensing allows ESX Server host licenses to be modified during periods of license server unavailability. For example, with host-based licensing you can manually move virtual SMP license keys between hosts without a license server connection.

By default, VirtualCenter and ESX Server software is configured to use TCP/IP ports 27000 and 27010 to communicate with the license server. If you did not use the default ports during license server installation, you must update the configuration on each ESX Server host.

If you change the default ports for the license server, log on to the ESX Server host service console and open the ports you want.

To open a specific port in the service console firewall:

1 Log on to the service console as the root user.
2 Execute this command:

esxcfg-firewall --openport <portnumber>,tcp

For example, esxcfg-firewall --openport 27000,tcp reopens the default license server port.

The following table summarizes which operations are permitted during the 14-day grace period and after it expires.

Component – Attempted Action – During Grace Period – After Grace Period Expires

Virtual machine – Power on – Permitted – Not Permitted
Virtual machine – Create/delete – Permitted – Permitted
Virtual machine – Suspend/resume – Permitted – Permitted
Virtual machine – Configure with VI Client – Permitted – Permitted
ESX Server host – Continue operations – Permitted – Permitted
ESX Server host – Power on/power off – Permitted – Permitted
ESX Server host – Configure with VI Client – Permitted – Permitted
ESX Server host – Modify license file for host-based licensing – Permitted – Permitted
VirtualCenter Server – Remove an ESX Server host from inventory – (see next entry)
VirtualCenter Server – Add an ESX Server host to inventory – Not Permitted – Not Permitted
VirtualCenter Server – Connect/reconnect to an ESX Server host in inventory – Permitted – Permitted
VirtualCenter Server – Move a powered-off virtual machine between hosts in inventory (cold migration) – Permitted – Permitted
VirtualCenter Server – Move an ESX Server host among folders in inventory – Permitted – Permitted
VirtualCenter Server – Move an ESX Server host out of a VMotion/DRS/HA cluster (see next entry) – Permitted – Permitted
VirtualCenter Server – Move an ESX Server host into a VMotion/DRS/HA cluster – Not Permitted – Not Permitted
VirtualCenter Server – Configure VirtualCenter Server with VI Client – Permitted – Permitted
VirtualCenter Server – Start VMotion between hosts in inventory – Permitted – Permitted
VirtualCenter Server – Continue load balancing within a DRS cluster – Permitted – Permitted
VirtualCenter Server – Restart virtual machines within the failed host's HA cluster – Permitted – Not Permitted
Any component – Add or remove license keys – Not Permitted – Not Permitted
Any component – Upgrade – Not Permitted – Not Permitted

ESX Server License Types

When you purchased your VMware Infrastructure software, you purchased one of three available editions, which are:

VMware Infrastructure Starter edition. Provides virtualization for small business and branch office environments. Its limited production-oriented features include:

NAS or local storage
Deployable on a server with up to four physical CPUs and up to 8GB physical memory

VMware Infrastructure Standard edition. Provides an enterprise-class virtualized infrastructure suite for any workload. All standard functionality is enabled, and all optional add-on licenses (purchased separately) can be configured with this edition. Includes all production-oriented features, such as:

NAS, iSCSI, and SAN usage
Up to four-way Virtual SMP

VMware Infrastructure Enterprise edition. Provides an enterprise-class virtualized infrastructure suite for the dynamic data center. It includes all the features of VMware Infrastructure Standard edition, and also includes all optional add-on licenses.

License Type Features for ESX Server Machines

Feature – ESX Server Standard – ESX Server Starter

Maximum number of virtual machines – Unlimited – Unlimited
SAN support – Yes – Not available
iSCSI support – Yes – Not available
NAS support – Yes – Yes
Virtual SMP support – Yes – Not available
VMware Consolidated Backup (VCB) – Add-on – Not available


Components Installed

The VMware VirtualCenter version 2 default installation includes the following components:

VMware VirtualCenter Server. A Windows service to manage ESX Server hosts.
Microsoft .NET Framework. Software used by the VirtualCenter Server, Database Upgrade wizard, and the Virtual Infrastructure Client.
VMware VI Web Access. A Web application to allow browser-based virtual machine management.
VMware Web Service. A software development kit (SDK) for VMware products.
VMware license server. A Windows service allowing all VMware products to be licensed from a central pool and managed from one console.

The last three components are optional if you select a custom setup.

When prompted for the license server location, specify it in one of these forms:

port@hostname – for example, [email protected]
port@ip.address – for example, [email protected]

Type a Web Service HTTPS port. The default is 443.
Type a Web Service HTTP port. The default is 80.
Type a VirtualCenter diagnostic port. The default is 8083.
Type a VirtualCenter port (the port which VirtualCenter uses to communicate with the VI Client). The default is 902.
Type a VirtualCenter heartbeat port. The default is 902.
Select the check box if you want to maintain compatibility with the older SDK Web interface.

The default ports that VirtualCenter Server uses to listen for connections from the VI Client are ports 80 and 902. VirtualCenter Server also uses port 443 to listen for data transfer from the VI Web Access Client and other SDK clients.

The default port that VirtualCenter uses to send data to the managed hosts is port 902.

Managed hosts also send a regular heartbeat over UDP port 902 to VirtualCenter Server. This port must not be blocked by firewalls.
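If the service console firewall on a host blocks these ports, they can be opened with the esxcfg-firewall command shown earlier in the licensing section (a sketch using the syntax given there; which ports, directions, and protocols you must open depends on your topology, so verify against your own environment):

  esxcfg-firewall --openport 902,tcp   # VirtualCenter and VI Client traffic to the host
  esxcfg-firewall --openport 902,udp   # heartbeat from the host to VirtualCenter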

Installing VMware ESX Server Software

To create a boot partition, use the following settings:

Mount Point: /boot
File System: ext3
Size (MB): VMware recommends 100MB
Additional Size Options: Fixed size

To create a swap partition, use the following settings:

Mount Point: Not applicable. This drop-down menu is disabled when you select swap for the file system.
File System: swap
Size (MB): VMware recommends 544MB. For a guide to sizing, see the description of the swap partition.
Additional Size Options: Fixed size

To create a root partition, use the following settings:

Mount Point: /
File System: ext3
Size (MB): VMware recommends at least 2560MB for the root partition, but you can fill the remaining capacity of the drive. For a guide to sizing, see the description of the root partition.
Additional Size Options: Fixed size

(Optional) To create a log partition (recommended), use the following settings:

Mount Point: /var/log
File System: ext3
Size (MB): 500MB is the minimum size, but VMware recommends 2000MB for the log partition

NOTE If your ESX Server host has no network storage and one local disk, you must create two more required partitions on the local disk (for a total of five required partitions):

vmkcore. A vmkcore partition is required to store core dumps for troubleshooting. VMware does not support ESX Server host configurations without a vmkcore partition.

vmfs3. A vmfs3 partition is required to store your virtual machines. These vmfs3 and vmkcore partitions are required on a local disk only if the ESX Server host has no network storage.
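Summing the recommended sizes above gives a rough lower bound for the local disk (a sketch; the 100MB vmkcore figure is a placeholder, since its size is not specified here, and the vmfs3 partition takes whatever capacity remains):

  boot=100 swap=544 root=2560 log=2000 vmkcore=100   # vmkcore size is a placeholder
  echo "space needed before the vmfs3 partition: $((boot + swap + root + log + vmkcore))MB"
  # prints 5304MB; size the disk well beyond this so vmfs3 has room for virtual machines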

Locating the Installation Logs

After you install and reboot, log on to the service console to read the installation logs:

/root/install.log is a complete log of the installation.
/root/anaconda-ks.cfg is a kickstart file recording the selected installation options.

Creating a Rescue Floppy Disk

Use dd, rawwritewin, or rawrite to write the floppy image called bootnet.img to a floppy disk. This file is located in the /images directory on the ESX Server CD.
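On a Linux system, for example, the image can be written with dd (a sketch; the CD mount point /mnt/cdrom and the floppy device /dev/fd0 are assumptions about your system):

  mount /mnt/cdrom                                  # mount the ESX Server CD
  dd if=/mnt/cdrom/images/bootnet.img of=/dev/fd0   # write the image to the floppy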

Functional Components

The functional components monitor and manage tasks. The functional components are available through a navigation button bar in the VI Client. The options are:

Inventory – A view of all the monitored objects in Virtual Center. Monitored objects include datacenters, resource pools, clusters, networks, data stores, templates, hosts, and virtual machines.

Scheduled tasks – A list of activities and a means to schedule those activities. This is available through Virtual Center Server only.

Events – A list of all the events that occur in the Virtual Center environment. Use the Navigation option to display all the events. Use an object-specific panel to display only the events relative to that object.

Admin – A list of environment-level configuration options. The Admin option provides configuration access to Roles, Sessions, Licenses, Diagnostics, and System Logs. When connected to an ESX Server, only the Roles option appears.

Maps – A visual representation of the status and structure of the VMware Infrastructure environment and the relationships between managed objects. This includes hosts, networks, virtual machines, and data stores. This is available only through VirtualCenter Server.

Various information lists are generated and tracked by your Virtual Infrastructure Client activity:


Tasks – These activities are scheduled or initiated manually. Tasks generate event messages that indicate any issues associated with the task.

Events – Messages that report Virtual Infrastructure activity. Event messages are predefined in the product.

Alarms – Specific notifications that occur in response to selected events. Some alarms are defined by product default. Additional alarms can be created and applied to selected inventory objects or all inventory objects.

Logs – Stored reference information related to selected event messages. Logs are predefined in the product. You can configure whether selected logs are generated.

Users and Groups – For VirtualCenter, users and groups are created and maintained through the Windows domain or Active Directory database. Users and groups are registered with VirtualCenter, or created and registered with an ESX Server, through the process that assigns privileges.

Roles – A set of access rights and privileges. There are selected default roles. You can also create roles and assign combinations of privileges to each role.

Storage Area Networks

A SAN (storage area network) is a specialized high-speed network that connects computer systems, or host servers, to high-performance storage subsystems. The SAN components include host bus adapters (HBAs) in the host servers, switches that help route storage traffic, cables, storage processors (SPs), and storage disk arrays.

A SAN topology with at least one switch present on the network forms a SAN fabric.

To transfer traffic from host servers to shared storage, the SAN uses the Fibre Channel (FC) protocol, which packages SCSI commands into Fibre Channel frames.

In the context of this document, a port is the connection from a device into the SAN. Each node in the SAN (a host, storage device, or fabric component) has one or more ports that connect it to the SAN. Ports can be identified in a number of ways:

WWPN (World Wide Port Name). A globally unique identifier for a port, which allows certain applications to access the port. The FC switches discover the WWPN of a device or host and assign a port address to the device.

Port_ID (or port address). Within the SAN, each port has a unique port ID that serves as the FC address for the port. This enables routing of data through the SAN to that port. The FC switches assign the port ID when the device logs into the fabric. The port ID is valid only while the device is logged on.

When transferring data between the host server and storage, the SAN uses a multipathing technique. Multipathing allows you to have more than one physical path from the ESX Server host to a LUN on a storage array.

If a default path or any component along the path (HBA, cable, switch port, or storage processor) fails, the server selects another of the available paths. The process of detecting a failed path and switching to another is called path failover.
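On an ESX Server 3.x host, the available paths and the active path for each LUN can be inspected from the service console (a sketch; the output format varies by release):

  esxcfg-mpath -l   # list each LUN, its paths, and which path is currently active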

Storage disk arrays can be of the following types:

An active/active disk array, which allows access to the LUNs simultaneously through all the storage processors that are available without significant performance degradation. All the paths are active at all times (unless a path fails).

An active/passive disk array, in which one SP is actively servicing a given LUN. The other SP acts as backup for the LUN and may be actively servicing other LUN I/O. I/O can be sent only to an active processor. If the primary storage processor fails, one of the secondary storage processors becomes active, either automatically or through administrator intervention.

To restrict server access to storage arrays not allocated to that server, the SAN uses zoning. Typically, zones are created for each group of servers that access a shared group of storage devices and LUNs. Zones define which HBAs can connect to which SPs. Devices outside a zone are not visible to the devices inside the zone.

Zoning is similar to LUN masking, which is commonly used for permission management. LUN masking is a process that makes a LUN available to some hosts and unavailable to other hosts. Usually, LUN masking is performed at the SP or server level.

Overview of Using ESX Server with SAN

Support for QLogic and Emulex FC HBAs allows an ESX Server system to be connected to a SAN array. You can then use SAN array LUNs to store virtual machine configuration information and application data. Using ESX Server with a SAN improves flexibility, efficiency, and reliability. It also supports centralized management as well as failover and load balancing technologies.

Benefits of Using ESX Server with SAN

You can store data redundantly and configure multiple FC fabrics, eliminating a single point of failure. Your enterprise is not crippled when one datacenter becomes unavailable.

ESX Server systems provide multipathing by default and automatically support it for every virtual machine.

Using ESX Server systems extends failure resistance to the server. When you use SAN storage, all applications can instantly be restarted after host failure.

Using ESX Server with a SAN makes high availability and automatic load balancing affordable for more applications than if dedicated hardware were used to provide standby services.

Because shared central storage is available, building virtual machine clusters that use MSCS becomes possible.

If virtual machines are used as standby systems for existing physical servers, shared storage is essential and a SAN is the best solution.

You can use the VMware VMotion capabilities to migrate virtual machines seamlessly from one host to another.

You can use VMware HA in conjunction with a SAN for a cold-standby solution that guarantees an immediate, automatic response.

You can use VMware DRS to automatically migrate virtual machines from one host to another for load balancing. Because storage is on a SAN array, applications continue running seamlessly.

If you use VMware DRS clusters, you can put an ESX Server host into maintenance mode to have the system migrate all running virtual machines to other ESX Server hosts. You can then perform upgrades or other maintenance operations.

The transportability and encapsulation of VMware virtual machines complements the shared nature of SAN storage. When virtual machines are located on SAN-based storage, it becomes possible to shut down a virtual machine on one server and power it up on another server, or to suspend it on one server and resume operation on another server on the same network, in a matter of minutes. This allows you to migrate computing resources while maintaining consistent shared access.

Use Cases

Using ESX Server systems in conjunction with a SAN is particularly effective for the following tasks:

Maintenance with zero downtime. When performing maintenance, you can use VMware DRS or VMotion to migrate virtual machines to other servers. If shared storage is on the SAN, you can perform maintenance without interruptions to the user.

Load balancing. You can use VMotion explicitly or use VMware DRS to migrate virtual machines to other hosts for load balancing. If shared storage is on a SAN, you can perform load balancing without interruption to the user.

Storage consolidation and simplification of storage layout. If you are working with multiple hosts, and each host is running multiple virtual machines, the hosts' storage is no longer sufficient and external storage is needed. Choosing a SAN for external storage results in a simpler system architecture while giving you the other benefits listed in this section. You can start by reserving a large LUN and then allocate portions to virtual machines as needed. LUN reservation and creation from the storage device needs to happen only once.

Disaster recovery. Having all data stored on a SAN can greatly facilitate remote storage of data backups. In addition, you can restart virtual machines on remote ESX Server hosts for recovery if one site is compromised.

Metadata Updates

A VMFS holds files, directories, symbolic links, RDMs, and so on, and corresponding metadata for these objects. Metadata is accessed each time the attributes of a file are accessed or modified. These operations include, but are not limited to:

Creating, growing, or locking a file.

Changing a file's attributes.

Powering a virtual machine on or off.

Zoning and ESX Server

Zoning provides access control in the SAN topology. Zoning defines which HBAs can connect to which SPs. When a SAN is configured using zoning, the devices outside a zone are not visible to the devices inside the zone.

Zoning has the following effects:

Reduces the number of targets and LUNs presented to an ESX Server system.

Controls and isolates paths within a fabric.

Can prevent non-ESX Server systems from seeing a particular storage system, and from possibly destroying ESX Server VMFS data.

Can be used to separate different environments (for example, a test environment from a production environment).

When you use zoning, keep in mind the following:

ESX Server hosts that use shared storage for failover or load balancing must be in one zone.

If you have a very large deployment, you might need to create separate zones for different areas of functionality. For example, you can separate accounting from human resources.

It does not work well to create many small zones of, for example, two hosts with four virtual machines each.


NOTE Whether a virtual machine can run management software successfully depends on the storage array in question.

NOTE Check with the storage array vendor for zoning best practices.

Choosing Larger or Smaller LUNs

During ESX Server installation, you are prompted to create partitions for your system. You need to plan how to set up storage for your ESX Server systems before you perform installation.

You can choose one of these approaches:

Many LUNs with one VMFS volume on each LUN

Many LUNs with a single VMFS volume spanning all LUNs

You can have at most one VMFS volume per LUN. You could, however, decide to use one large LUN or multiple small LUNs.

You might want fewer, larger LUNs for the following reasons:

More flexibility to create virtual machines without going back to the SAN administrator for more space.

More flexibility for resizing virtual disks, doing snapshots, and so on.

Fewer LUNs to identify and manage.

You might want more, smaller LUNs for the following reasons:

Less contention on each VMFS due to locking and SCSI reservation issues.

Different applications might need different RAID characteristics.

More flexibility (the multipathing policy and disk shares are set per LUN).

Use of Microsoft Cluster Service, which requires that each cluster disk resource be in its own LUN.

Choosing Virtual Machine Locations

When you're working on optimizing performance for your virtual machines, storage location is an important factor. There is always a trade-off between expensive storage that offers high performance and high availability and storage with lower cost and lower performance. Storage can be divided into different tiers depending on a number of factors:

High Tier. Offers high performance and high availability. May offer built-in snapshots to facilitate backups and Point-in-Time (PiT) restorations. Supports replication, full SP redundancy, and fibre drives. Uses high-cost spindles.

Mid Tier. Offers mid-range performance, lower availability, some SP redundancy, and SCSI drives. May offer snapshots. Uses medium-cost spindles.

Lower Tier. Offers low performance and little internal storage redundancy. Uses low-end SCSI drives or SATA (serial low-cost spindles).

Not all applications need to be on the highest-performance, most available storage, at least not throughout their entire life cycle.

Virtual Switch Policies

You can apply a set of vSwitch-wide policies by selecting the vSwitch at the top of the Ports tab and clicking Edit.

To override any of these settings for a port group, select that port group and click Edit. Any changes to the vSwitch-wide configuration are applied to all of the port groups on that vSwitch, except for those configuration options that have been overridden by the port group.

The vSwitch policies consist of:

Layer 2 Security policy

Traffic Shaping policy

Load Balancing and Failover policy

Layer 2 Security Policy

Layer 2 is the data link layer. The three elements of the Layer 2 Security policy are promiscuous mode, MAC address changes, and forged transmits.

In non-promiscuous mode, a guest adapter listens to traffic only on its own MAC address. In promiscuous mode, it can listen to all the packets. By default, guest adapters are set to non-promiscuous mode.

Promiscuous Mode

Reject — Placing a guest adapter in promiscuous mode has no effect on which frames are received by the adapter.

Accept — Placing a guest adapter in promiscuous mode causes it to detect all frames passed on the vSwitch that are allowed under the VLAN policy for the port group that the adapter is connected to.

MAC Address Changes

Reject — If you set MAC Address Changes to Reject and the guest operating system changes the MAC address of the adapter to anything other than what is in the .vmx configuration file, all inbound frames will be dropped.

If the Guest OS changes the MAC address back to match the MAC address in the .vmx configuration file, inbound frames will be passed again.

Accept — Changing the MAC address from the Guest OS has the intended effect: frames to the new MAC address are received.

Forged Transmits

Reject — Any outbound frame with a source MAC address that is different from the one currently set on the adapter will be dropped.

Accept — No filtering is performed and all outbound frames are passed.
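The Reject behaviors above amount to simple comparisons of MAC addresses. A minimal sketch in Python (illustrative only; the function and parameter names are invented, not a VMware API):

def allow_inbound(frame_dst_mac, effective_mac, vmx_mac, mac_changes_policy):
    # MAC Address Changes set to Reject: drop inbound frames while the
    # guest's effective MAC differs from the MAC recorded in the .vmx file.
    if mac_changes_policy == "Reject" and effective_mac != vmx_mac:
        return False
    return frame_dst_mac == effective_mac

def allow_outbound(frame_src_mac, effective_mac, forged_transmits_policy):
    # Forged Transmits set to Reject: drop outbound frames whose source
    # MAC differs from the adapter's currently configured MAC.
    if forged_transmits_policy == "Reject":
        return frame_src_mac == effective_mac
    return True  # Accept: no outbound filtering is performed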

Traffic Shaping Policy

ESX Server shapes traffic by establishing parameters for three outbound traffic characteristics: average bandwidth, burst size, and peak bandwidth. You can set values for these characteristics through the VI Client, establishing a traffic shaping policy for each uplink adapter.

Average Bandwidth establishes the number of bits per second to allow across the vSwitch averaged over time—the allowed average load.

Burst Size establishes the maximum number of bytes to allow in a burst. If a burst exceeds the burst size parameter, excess packets are queued for later transmission. If the queue is full, the packets are dropped. When you specify values for these two characteristics, you indicate what you expect the vSwitch to handle during normal operation.

Peak Bandwidth is the maximum bandwidth the vSwitch can absorb without dropping packets. If traffic exceeds the peak bandwidth you establish, excess packets are queued for later transmission after traffic on the connection has returned to the average and there are enough spare cycles to handle the queued packets. If the queue is full, the packets are dropped. Even if you have spare bandwidth because the connection has been idle, the peak bandwidth parameter limits transmission to no more than peak until traffic returns to the allowed average load.
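To make the interplay of these parameters concrete, here is a minimal token-bucket-style sketch in Python (illustrative only, not VMware's implementation; names are invented, and the peak-bandwidth cap on the drain rate is omitted for brevity):

import collections
import time

class TrafficShaper:
    # Token-bucket sketch: the average bandwidth refills the bucket,
    # whose depth is the burst size. Packets beyond the available
    # tokens are queued; packets beyond the queue limit are dropped.
    def __init__(self, average_bps, burst_bytes, queue_limit=64):
        self.refill_rate = average_bps / 8.0   # bytes per second
        self.burst_bytes = burst_bytes
        self.tokens = burst_bytes              # start with a full bucket
        self.queue = collections.deque()
        self.queue_limit = queue_limit
        self.last = time.monotonic()

    def send(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.burst_bytes,
                          self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return "sent"
        if len(self.queue) < self.queue_limit:
            self.queue.append(packet_bytes)    # waits for spare cycles
            return "queued"
        return "dropped"                       # queue full: packet is lost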

Load Balancing — Specify how to choose an uplink.

Route based on the originating port ID — Choose an uplink based on the virtual port where the traffic entered the virtual switch.

Route based on ip hash — Choose an uplink based on a hash of the source and destination IP addresses of each packet (see the sketch after this list). For non-IP packets, whatever is at those offsets is used to compute the hash.

Route based on source MAC hash — Choose an uplink based on a hash of the source Ethernet MAC address.

Use explicit failover order — Always use the highest order uplink from the list of Active adapters which passes failover detection criteria.
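As an illustrative sketch of the ip-hash selection described above (not VMware's actual code; names are invented), the uplink choice reduces to a hash of the two addresses modulo the size of the NIC team:

import ipaddress

def choose_uplink(src_ip, dst_ip, uplinks):
    # Hash the source and destination IP addresses, then pick an uplink
    # by taking the result modulo the number of adapters in the team.
    key = int(ipaddress.ip_address(src_ip)) ^ int(ipaddress.ip_address(dst_ip))
    return uplinks[key % len(uplinks)]

print(choose_uplink("10.0.0.5", "10.0.1.9", ["vmnic0", "vmnic1"]))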

Network Failover Detection — Specify the method to use for failover detection.

Link Status only – Relies solely on the link status provided by the network adapter. This detects failures such as cable pulls and physical switch power failures, but not configuration errors such as a physical switch port being blocked by spanning tree or misconfigured to the wrong VLAN, or cable pulls on the other side of a physical switch.

Beacon Probing – Sends out and listens for beacon probes on all NICs in the team and uses this information, in addition to link status, to determine link failure. This detects many of the failures mentioned above that are not detected by link status alone.

11.3. The ESX Server Boot Process


Several boot loaders are used on Linux systems, such as the Grand Unified Bootloader (GRUB) and the Linux Loader (LILO). ESX uses LILO as its boot loader and has system components that expect LILO to be present, so don't replace LILO with another boot loader, or your server may experience problems. The configuration parameters for the boot loader are contained in /etc/lilo.conf in a human-readable format, but the actual boot loader is stored in a binary format on the boot sector of the default boot disk. This section explains the boot process of ESX Server, as well as how the VMkernel and configuration files are loaded.

11.3.1. High-Level Boot Process for ESX Server

The BIOS is executed on the server.

The BIOS launches LILO from the default boot drive.

LILO loads the Linux kernel for the Service Console.

The Service Console launches the VMkernel.

The MUI server is started.

Virtual machines can then be launched by the VMkernel and managed through the MUI.

11.3.2. Detailed Boot Process

As you can see in Figure 11.3, esx is the default boot image that loads automatically after the timeout period. This is configured in the /etc/lilo.conf file shown in Figure 11.4 on the line default=esx. The Linux kernel for the Service Console is loaded into the lowest part of memory when it is started and occupies the amount of memory specified during the installation of ESX Server. The line in the /etc/lilo.conf file shown in Figure 11.4 that reads append="mem=272M cpci=0;*;1:*;2:*;3:*;6:*;" shows that the Service Console occupies the first 272MB of memory on the server. Figure 11.5 shows a screenshot from the MUI where the Reserved Memory is set in the Options|Startup Profile for the server.
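Putting those entries together, a minimal /etc/lilo.conf along these lines would produce the behavior described above. This is a hedged sketch: only the default= and append= lines are quoted from the text; the timeout, kernel image path, and root device are illustrative.

# Illustrative sketch; verify paths and devices on your own system.
prompt
timeout=50
default=esx

image=/boot/vmlinuz
    label=esx
    read-only
    root=/dev/sda2
    append="mem=272M cpci=0;*;1:*;2:*;3:*;6:*;"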

Using HA and DRS Together

When HA performs failover and restarts virtual machines on different hosts, its first priority is the immediate availability of all virtual machines. After the virtual machines have been restarted, those hosts on which they were powered on might be heavily loaded, while other hosts are comparatively lightly loaded. HA uses the CPU and memory reservation to decide failover, while the actual usage might be higher. You can also set up affinity and anti-affinity rules in DRS to distribute virtual machines to help availability of critical resources. For example, you can use an anti-affinity rule to make sure two virtual machines running a critical application never run on the same host. Using HA and DRS together combines automatic failover with load balancing. This combination can result in a fast rebalancing of virtual machines after HA has moved virtual machines to different hosts. You can set up affinity and anti-affinity rules to start two or more virtual machines preferentially on the same host (affinity) or on different hosts (anti-affinity).

Using DRS Affinity Rules

After you have created a DRS cluster, you can edit its properties to create rules that specify affinity. You can use these rules to determine that:

DRS should try to keep certain virtual machines together on the same host, for example, for performance reasons (affinity).

DRS should try to make sure that certain virtual machines are not together, for example, for high availability. You might want to guarantee certain virtual machines are always on different physical hosts, so that when there's a problem with one host, you don't lose both virtual machines (anti-affinity).

Using CPU Affinity to Assign Virtual Machines to Specific Processors

Affinity means that you can restrict the assignment of virtual machines to a subset of the available processors in multiprocessor systems. You do so by specifying an affinity setting for each virtual machine.
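For reference, this per-virtual-machine restriction can be expressed in the virtual machine's configuration file. The following one-line .vmx excerpt is a hedged sketch: the sched.cpu.affinity option name is given from memory and should be verified against your ESX version's documentation.

sched.cpu.affinity = "2,3"

This would restrict the virtual machine's scheduling to physical CPUs 2 and 3.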

VMware Workstation and its virtual computing technology have changed the way most companies look at test environments and, in some cases, even production environments. However, VMware Workstation isn't the only technology that VMware has to offer. The company also offers GSX Server and now ESX Server as well. Let's look at how to best leverage these technologies in your company.

VMware Workstation

VMware Workstation uses virtual machine technology that is designed mostly for the power user. It allows you to run multiple operating systems on a single PC. The operating systems that can run under a VMware virtual machine include Windows 2000, Windows XP, Windows 2003 Server, Novell NetWare, and Linux.

After running through a simple installation of VMware Workstation, you have the ability to configure virtual machines within VMware’s interface. These virtual machines act and look just like a real computer, except they sit inside a window.

In addition, you can network these computers, join and disjoin them from a domain, connect to the Internet and other networks from within them, and simulate whatever environment you choose.

On one of my computers, I've used VMware Workstation to simulate an entire Windows 2003 network with Windows XP clients. With this environment, I can test all of the Windows 2003 product line for compatibility with my network, as well as study for my Windows Server 2003 certification exams. In the past, I had to have at least three systems to be able to accomplish this kind of testing. Now all I need is one computer, an Internet connection, and VMware Workstation.

How does this work?


VMware works simultaneously with your operating system to allow you to host multiple virtual machines. It does this by allowing you to configure your virtual machines on the VMware virtualization layer. This layer lets you map your hardware to the virtual machine's resources and have virtual machines mapped to your floppy drive, hard drive, CPU, and so on. Inside each virtual machine, you can create virtual hard disks and specify how much RAM you want to allocate to each of your virtual machines. Plus, each virtual machine can have its own IP address, even if the system hardware has only one network adapter.

In most of the environments I've seen, VMware Workstation is typically used to configure test environments, software development testing, training classrooms, and technical support (to simulate the environment of the user). Now that you've seen how the power user can use VMware, let's examine how VMware can meet the enterprise server and mainframe needs of your company.

VMware GSX Server

I recently was given the opportunity to evaluate VMware GSX Server, and I was impressed by how well it worked. VMware Workstation supports only one CPU and up to 1 GB of RAM. GSX Server supports 2 CPUs and up to 2 GB of RAM. GSX Server is very similar to Workstation in most other ways, but one of its coolest features is the Remote Console, which allows you to remotely manage and access your virtual machine from anywhere on your network. In addition, it's much easier to work with in a high availability configuration.

While VMware Workstation is mostly used by a single user to run multiple instances of operating systems for testing and support purposes, GSX Server is often used for server consolidation by running virtual machines of server operating systems that simply appear to be stand-alone servers to clients on the network.

VMware ESX Server

VMware ESX Server is mainframe-class virtual machine software. This solution is typically used by mainframe data centers and cutting-edge companies. I've also seen this solution used by startup companies. With ESX Server, you can do amazing things such as more extensive server consolidation and virtual machine clustering.

How does it differ from GSX Server and VMware Workstation?

With VMware Workstation and GSX Server, the software sits on top of a host operating system such as Windows or Linux. With ESX Server, the software runs directly on the system's hardware, eliminating the need to install a base OS. In fact, ESX has its own OS. The software basically runs on its own Linux kernel, and Linux is quite beneficial to know when working with the product, although it's not an absolute necessity.

Installation of this product is quite basic. You place the CD in the tray of a system and boot from the CD. It runs you through a typical Linux installation. At the end of the install, you're instructed to go to a separate machine and type in a specific Web address to access the virtual console of ESX Server. From there, you'll configure your system and create virtual machines. With ESX Server, you can have up to 3.6 GB of RAM per virtual machine as well as high-performance network cards.

How are companies using ESX Server?

What I really like about this product is how companies are using it. For example, I've seen startups simply purchase a SAN and ESX Server and create their whole network using ESX Server. This includes the servers and workstations, which are accessed with thin clients.

ESX Server is lightning fast, so you can't tell the difference between real systems and its virtual systems (if you have powerful hardware running ESX Server). Furthermore, I've seen data centers use ESX Server for hosting client environments and test environments. In the future, I think more companies will take advantage of ESX Server as part of their business strategy.

Final analysis

Virtual machine technology is becoming more and more mainstream in today's IT marketplace. With the current trend toward consolidating servers, VMware is quickly making a place for its products in the server room. Microsoft has even taken an interest in the virtual machine market by buying Virtual PC. However, Microsoft's product line doesn't quite have the maturity of the VMware product line when it comes to providing enterprise-class server solutions.

VMware GSX no longer exists; it has been replaced by VMware Server, which is free. VMware Server is free virtualization software that runs on a Windows Server platform. It is good for testing and smaller environments.

VMware ESX is the hypervisor from VMware.

It has its own OS, so it cannot be installed on top of Windows; it must be installed on the server itself. It uses its own file system, VMFS, and has really nice features like VMotion, HA, and resource groups.

The virtualization technology for the Enterprise.

VMware ESX Server 2.0

Server Hardware Requirements

For information on supported hardware, download the VMware ESX Server Hardware Compatibility Guide from the VMware Web site at www.vmware.com/support/esx2.

Minimum Server Requirements

Two to sixteen processors: Intel® 900MHz Pentium® III Xeon and above

512MB RAM minimum

One or more Ethernet controllers. Supported controllers include:

Broadcom® NetXtreme 570x Gigabit controllers

Intel PRO/100 adapters

Intel PRO/1000 adapters

3Com® 9xx based adapters

Note: If ESX Server has two or more Ethernet controllers, for best performance and security, use separate Ethernet controllers for the service console and the virtual machines.

A SCSI adapter, Fibre Channel adapter or internal RAID controller. The basic SCSI adapters supported are Adaptec®, LSI Logic and most NCR/Symbios SCSI adapters. The RAID adapters supported are HP® Smart Array, Dell® PercRAID (Adaptec RAID and LSI MegaRAID), ServeRAID and Mylex® RAID devices. The Fibre Channel adapters that are supported are Emulex and QLogic adapters.


The supported SCSI controllers are Adaptec® Ultra-160 and Ultra-320, LSI Logic Fusion-MPT and most NCR/Symbios SCSI controllers. The supported RAID controllers are HP® Smart Array, Dell® PercRAID (Adaptec RAID and LSI MegaRAID), IBM® (Adaptec) ServeRAID and Mylex RAID controllers. The supported Fibre Channel adapters are Emulex and QLogic host-bus adapters (HBAs).

A SCSI disk, Fibre Channel LUN or RAID LUN with unpartitioned space. In a minimum configuration, this disk or RAID is shared between the service console and the virtual machines.

Note: To ensure the best possible performance, always use Fibre Channel cards in dedicated mode. We do not recommend sharing Fibre Channel cards between the service console and the virtual machines.

Recommended for Enhanced Performance

A second disk controller with one or more drives, dedicated to the virtual machines

Sufficient RAM for each virtual machine and the service console

Dedicated Ethernet cards for network-sensitive virtual machines

The lists above outline a basic configuration. In practice, you may use multiple physical disks, which may be SCSI disks, Fibre Channel LUNs or RAID LUNs. For best performance, all of the data used by the virtual machines should be on the physical disks allocated to virtual machines. Therefore, these physical disks should be large enough to hold disk images that will be used by all the virtual machines.

Similarly, you should provide enough RAM for all of the virtual machines plus the service console. For background on the service console, see Characteristics of the VMware Service Console. For details on how to calculate the amount of RAM you need, see Sizing Memory on the Server.

Note: To ensure the best possible I/O performance and workload management, VMware ESX Server provides its own drivers for supported devices. Be sure that the devices you plan to use in your server are supported. For additional detail on I/O device compatibility, download the VMware ESX Server I/O Adapter Compatibility Guide from the VMware Web site at www.vmware.com/support/esx2.

ESX Server virtual machines can share a SCSI disk with the service console, but for enhanced disk performance, you can configure the virtual machines to use a SCSI adapter and disk separate from those used by the service console. You should make sure enough free disk space is available to install the guest operating system and applications for each virtual machine on the disk that they will use.

Maximum Physical Machine Specifications

Storage

16 host bus adapters per ESX Server system

128 logical unit numbers (LUNs) per storage array

128 LUNs per ESX Server system

VMware File System (VMFS)

128 VMFS volumes per ESX Server system

Maximum physical extents per VMFS volume:

VMFS-2 volumes: 32 physical extents

VMFS-1 volumes: 1 physical extent

2TB per physical extent

Maximum size per VMFS volume:

VMFS-2 volumes: approximately 64TB, with a maximum of 2TB per each physical extent

VMFS-1 volumes: approximately 2 TB


CPU

16 physical processors per system, with 8 virtual CPUs per processor

80 virtual CPUs in all virtual machines per ESX Server system

Memory

64GB of RAM per ESX Server system

Up to 8 swap files, with a maximum file size of 64GB per swap file

Adapters

64 adapters of all types, including storage and network adapters, per system

16 Ethernet ports per system

Up to 8 Gigabit Ethernet ports or up to 16 10/100 Ethernet ports per system

Up to 32 virtual machines per virtual network device (vmnic or vmnet adapter)

Remote Management Workstation Requirements

The remote workstation is a Windows NT 4.0, Windows 2000, Windows XP or Linux system from which you launch the VMware Remote Console and access the VMware Management Interface. The VMware Remote Console runs as a standalone application. The VMware Management Interface uses a Web browser.

Hardware Requirements

Standard x86-based computer

266MHz or faster processor

64MB RAM minimum

10MB free disk space required for basic installation

Software — Windows Remote Workstation

Windows XP Professional

Windows 2000 Professional, Server or Advanced Server

Windows NT 4.0 Workstation or Server, Service Pack 6a

The VMware Management Interface is designed for these browsers:

Internet Explorer 5.5 or 6.0 (6.0 highly recommended for better performance)

Netscape Navigator® 7.0

Mozilla 1.x

Software — Linux Remote Workstation

Compatible with standard Linux distributions with glibc version 2 or higher and one of the following:

For single-processor systems: kernel 2.0.32 or higher in the 2.0.x series, kernel in the 2.2.x series or kernel in the 2.4.x series

For multiprocessor systems: kernel in the 2.2.x series or kernel in the 2.4.x series

The VMware Management Interface is designed for these browsers:

Netscape Navigator 7.0

Mozilla 1.x

Supported Guest Operating Systems

In ESX Server 2.0, VMware Virtual SMP for ESX Server is supported on all of the following guest operating systems marked SMP-capable, for dual-virtual CPU configurations.

 Guest Operating System  SMP-Capable

 Windows Server 2003 (Enterprise, Standard and Web Editions)  Yes

 Windows XP Professional (Service Pack 1)  No

 Windows 2000 Server (Service Pack 3 or 4)  Yes

 Windows 2000 Advanced Server (Service Pack 3 or 4)  Yes

 Windows NT 4.0 — Service Pack 6a  No

 Red Hat Linux 7.2  Yes


 Red Hat Linux 7.3 and 8.0  No

 Red Hat Linux 9.0  Yes

 Red Hat Enterprise Linux (AS) 2.1 and 3.0  Yes

 SuSE Linux 8.2  Yes

 SuSE Linux Enterprise Server (SLES) 8  Yes

 Novell NetWare 6.5 and 5.1 (Patch 6)  No

Virtual Machine Specifications

Each ESX Server machine can host up to 80 virtual CPUs in virtual machines (and up to 200 registered virtual machines) on a single ESX Server, or up to 8 virtual machines for each CPU, with the following capabilities and specifications.

Virtual Storage

4 host bus adapters per virtual machine

15 targets per host bus adapter

60 targets per virtual machine; 256 targets concurrently in all virtual machines

Virtual Processor

Intel Pentium II or later (dependent on system processor)

One or two processors per virtual machine.

Note: If you plan to create a dual-virtual CPU virtual machine, then your ESX Server machine must have at least two physical processors and you must have purchased the VMware Virtual SMP for ESX Server product.

Virtual Chip Set: Intel 440BX-based motherboard with NS338 SIO chip

Virtual BIOS: PhoenixBIOS 4.0 Release 6

Virtual Memory: Up to 3.6GB per virtual machine

Virtual SCSI Devices: Up to four virtual SCSI adapters per virtual machine, with up to 15 devices per adapter; 9TB per virtual disk

Virtual Ethernet Cards: Up to four virtual Ethernet adapters per virtual machine

Note: Each virtual machine has a total of 5 virtual PCI slots; therefore, the total number of virtual adapters, SCSI plus Ethernet, cannot be greater than 5.

Virtual Floppy Drives: Up to two 1.44MB floppy drives per virtual machine

Virtual CD-ROM: Up to two drives per virtual machine

Legacy Devices: Virtual machines may also make use of the following legacy devices. However, for performance reasons, use of these devices is not recommended.

Virtual Serial (COM) Ports: Up to two serial ports per virtual machine

Virtual Parallel (LPT) Ports: One LPT port per virtual machine

VMware Versions Compared

In the past, VMware was just a single product. Now, you will find that there is a wide variety of VMware products to choose from. Because of this, it can be confusing which one to choose. This article aims at helping you sort it all out by providing a quick review of all VMware products.

With that, I will now list out the major VMware products and provide my take on how these products differ from one another.

ESX Server

VMware's ESX Server is at the highest end of features and price of all the VMware server applications. ESX actually loads right onto "bare-metal" servers. Thus, there is no need to first load an underlying operating system prior to loading VMware ESX. What is unique about ESX is that it comes with its own modified Linux kernel called the VMkernel (based on Red Hat Enterprise Linux). One of the strongest features of VMware ESX Server is its performance. When running on similar hardware, you can run twice as many virtual servers on ESX as you can on VMware Server. ESX is now sold in a suite of products called VMware Infrastructure.

Overview:

Enterprise class

High availability

Better manageability

Used for enterprise applications like Oracle, SQL Server, clustered servers, and other critical infrastructure servers

Supports 4-10+ virtual machines per server, depending on hardware

Supports up to 32 physical CPUs (and 128 virtual) and up to 64GB of RAM

Loads directly on hardware with no need to load an underlying operating system (because it uses the VMkernel)

VMware Server

VMware Server is a FREE VMware virtualization product built for use on production servers. Unlike ESX, VMware Server still uses an underlying host operating system. With VMware Server, you lose some of the functionality and performance of ESX Server but don't have as great a price tag (it's free!). For an organization starting with a single VMware server and not anticipating drastic growth, VMware Server is for you. VMware Server's primary competition is Microsoft's Virtual Server.

Overview:

Used for medium/small business workgroup servers

Excellent for software development uses

Used for Intranet, utility, and workgroup application servers

Supports 2-4+ virtual machines per server, depending on hardware

Supports 2-16 CPUs and up to 64GB of RAM (but limited by the host OS)

Runs on top of Linux or Windows Server

Workstation

VMware's Workstation is for use on a client workstation. For example, say that I want to run both Windows 2003 Server and Linux Fedora Core 5 on my desktop workstation, which is running Windows XP. VMware Workstation would be the program I would use to do this. This would allow me the flexibility to run these guest operating systems to test various applications and features. I could also create snapshots of them to capture their configuration at a certain point in time and easily duplicate them to create other virtual machines (such as moving them to a VMware Server). Keep in mind that I would have to have a "beefy" workstation with lots of RAM and CPU to keep up with the applications I am also running on my host operating system (Windows XP).

Some people ask whether you could run Workstation on a "server" and just not have to use VMware Server. The answer is that, while you can do this, you don't want to, because the server's applications won't perform well under load and neither will the multiple operating systems.

You might ask why you would buy VMware Workstation for $189 when VMware Server is free. Many people would assume that Server is better and costs less. The answer is that VMware Workstation and VMware Server serve different purposes. VMware Server should be used to run test or production servers. On the other hand, VMware Workstation would be used by testers and developers because of its powerful snapshot manager. This development and testing also applies to IT professionals who want the ability to take multiple snapshots of their virtual systems and be able to jump forward and back in these snapshots. However, you do not want to run production servers in VMware Workstation. In other words, VMware Workstation and VMware Server have different purposes and should not be looked at as competing products.

Overview:

Runs on your desktop operating system

Costs $189

Great for testing applications and developing software

Can create new virtual machines, where VMware Player cannot

Supports bridged, host-only, or NAT network configurations

Ability to share folders between the host OS and virtual machines

Access to host devices like CD/DVD drives and USB devices

Snapshot manager allows multiple snapshots and the ability to move forward and backwards between them

Log files should be used only when you are having trouble with a virtual machine.

VMDK files – VMDK files are the actual hard drive for the virtual machine. Usually you will specify that a virtual machine’s disk can grow as needed. In that case, the VMDK file will be continually growing, up to a size of 2GB. After 2GB, subsequent VMDK files will be created.

VMEM – A VMEM file is a backup of the virtual machine’s paging file. It will only appear if the virtual machine is running, or if it has crashed.

VMSN & VMSD files – these files are used for VMware snapshots. A VMSN file is used to store the exact state of the virtual machine when the snapshot was taken. Using this snapshot, you can then restore your machine to the same state as when the snapshot was taken. A VMSD file stores information about snapshots (metadata). You’ll notice that the names of these files match the names of the snapshots.

NVRAM files – these files are the BIOS for the virtual machine. The VM must know how many hard drives it has and other common BIOS settings. The NVRAM file is where that BIOS information is stored.

VMX files – a VMX file is the primary configuration file for a virtual machine. When you create a new virtual machine and answer questions about the operating system, disk sizes, and networking, those answers are stored in this file. A VMX file is actually a simple text file that can be edited with Notepad.
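The original listing (a screenshot of the "Windows XP Professional.vmx" file) is not reproduced here; the following is an illustrative sketch of the kind of entries such a file contains (the values are examples, not the original's contents):

config.version = "8"
virtualHW.version = "4"
displayName = "Windows XP Professional"
guestOS = "winXPPro"
memsize = "512"
scsi0.present = "TRUE"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "Windows XP Professional.vmdk"
ethernet0.present = "TRUE"
ethernet0.connectionType = "bridged"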

What are all the files that are located in my virtual machine's directory on the ESX server for?

*.nvram file – This file contains the CMOS/BIOS for the VM. The BIOS is based on the PhoenixBIOS 4.0 Release 6, one of the most successful and most widely used BIOSes, and is compliant with all the major standards, including USB, PCI, ACPI, 1394, WfM and PC2001. If the NVRAM file is deleted or missing, it will automatically be re-created when the VM is powered on. Any changes made to the BIOS via the Setup program (F2 at boot) will be saved in this file. This file is usually less than 10K in size and is not in a text format (it is binary).

vmdk files – These are the disk files that are created for each virtual hard drive in your VM. There are 3 different types of files that use the vmdk extension:

• *–flat.vmdk file - This is the actual raw disk file that is created for each virtual hard drive. Almost all of a .vmdk file's content is the virtual machine's data, with a small portion allotted to virtual machine overhead. This file will be roughly the same size as your virtual hard drive.

• *.vmdk file – This isn't the file containing the raw data anymore. Instead, it is the disk descriptor file, which describes the size and geometry of the virtual disk file. This file is in text format and contains the name of the –flat.vmdk file with which it is associated, as well as the hard drive adapter type, drive sectors, heads and cylinders, and so on. One of these files will exist for each virtual hard drive that is assigned to your virtual machine. You can tell which –flat.vmdk file it is associated with by opening the file and looking at the Extent description field.
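For illustration, a descriptor file of this kind might look like the following sketch (field values are examples and vary per disk; the Extent description line names the companion –flat.vmdk data file):

# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 16777216 VMFS "myvm-flat.vmdk"

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "1044"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"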

• *–delta.vmdk file – This is the differential file created when you take a snapshot of a VM (also known as the REDO log). When you snapshot a VM, it stops writing to the base vmdk and starts writing changes to the snapshot delta file. The snapshot delta will initially be small and then start growing as changes are made to the base vmdk file. The delta file is a bitmap of the changes to the base vmdk, so it can never grow larger than the base vmdk. A delta file will be created for each snapshot that you create for a VM. These files are automatically deleted when the snapshot is deleted or reverted in snapshot manager.

*.vmx file – This file is the primary configuration file for a virtual machine. When you create a new virtual machine and configure the hardware settings for it, that information is stored in this file. This file is in text format and contains entries for the hard disk, network adapters, memory, CPU, ports, power options, etc. You can either edit these files directly, if you know what to add, or use the VMware GUI (Edit Settings on the VM), which will automatically update the file.

*.vswp file – This is the VM swap file (earlier ESX versions had a per-host swap file) and is created to allow for memory overcommitment on an ESX server. The file is created when a VM is powered on and deleted when it is powered off. By default, when you create a VM, the memory reservation is set to zero, meaning no memory is reserved for the VM and it can potentially be 100% overcommitted. As a result, a vswp file is created equal to the amount of memory that the VM is assigned minus the memory reservation that is configured for the VM. So a VM that is configured with 2GB of memory will create a 2GB vswp file when it is powered on; if you set a memory reservation of 1GB, then it will only create a 1GB vswp file. If you specify a 2GB reservation, then it creates a 0-byte file that it does not use. When you do specify a memory reservation, physical RAM from the host will be reserved for the VM and not usable by any other VMs on that host. A VM will not use its vswp file as long as physical RAM is available on the host. Once all physical RAM on the host is used by its VMs and the host becomes overcommitted, VMs start to use their vswp files instead of physical memory. Since the vswp file is a disk file, it will affect the performance of the VM when this happens. If you specify a reservation and the host does not have enough physical RAM when the VM is powered on, then the VM will not start.
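The sizing rule above reduces to simple arithmetic. A minimal Python sketch (illustrative only; the function name is invented, not a VMware tool):

def vswp_size_mb(assigned_mb, reservation_mb):
    # Swap file size = assigned memory minus the configured reservation.
    return max(assigned_mb - reservation_mb, 0)

print(vswp_size_mb(2048, 0))     # 2048: a 2GB vswp file
print(vswp_size_mb(2048, 1024))  # 1024: a 1GB vswp file
print(vswp_size_mb(2048, 2048))  # 0: a zero-byte file that is not used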

*.vmss file – This file is created when a VM is put into Suspend (pause) mode and is used to save the suspend state. It is basically a copy of the VM's RAM and will be a few megabytes larger than the maximum RAM allocated to the VM. If you delete this file while the VM is in a suspend state, the VM will start from a normal boot instead of from the state it was in when it was suspended. This file is not automatically deleted when the VM is brought out of Suspend mode. Like the vswp file, this file will only be deleted when the VM is powered off (not rebooted). If a vmss file exists from a previous suspend and the VM is suspended again, then the previous file is re-used for the subsequent suspensions. Also note that if a vswp file is present, it is deleted when a VM is suspended and then re-created when the VM is powered on again. The reason for this is that the VM is essentially powered off in the suspend state; its RAM contents are just preserved in the vmss file so it can be quickly powered back on.

*.log file – This is the file that keeps a log of the virtual machine's activity and is useful in troubleshooting virtual machine problems. Every time a VM is powered off and then back on, a new log file is created. The current log file for the VM is always vmware.log. The older log files are incremented with a -# in the filename (e.g., vmware-4.log), and up to 6 of them will be retained. The older .log files can always be deleted at will; the latest .log file can be deleted when the VM is powered off. As the log files do not take much disk space, most administrators let them be.

*.vmxf file – This is a supplemental configuration file in text format for virtual machines that are in a team. Note that the .vmxf file remains if a virtual machine is removed from the team. Teaming virtual machines is a VMware Workstation feature that lets administrators designate multiple virtual machines as a team, which can then be powered on and off, suspended, and resumed as a single object — making it particularly useful for testing client-server environments. This file still exists with ESX Server virtual machines, but only for compatibility purposes with Workstation.

*.vmsd file – This file is used to store metadata and information about snapshots. This file is in text format and will contain information such as the snapshot display name, uid, disk file name, etc. It is initially a 0-byte file until you create your first snapshot of a VM; from that point on, the file is populated and continues to be updated whenever new snapshots are taken. This file does not clean up completely after snapshots are deleted. Once you delete a snapshot, it will still leave the fields in the file for each snapshot and just increment the uid and set the name to "Consolidate Helper", presumably to be used with Consolidated Backups.

*.vmsn file – This is the snapshot state file, which stores the exact running state of a virtual machine at the time you take that snapshot. This file will be either small or large depending on whether you select to preserve the VM's memory as part of the snapshot. If you do choose to preserve the VM's memory, then this file will be a few megabytes larger than the maximum RAM allocated to the VM. This file is similar to the vmss (suspend) file. A vmsn file will be created for each snapshot taken on the VM; these files are automatically deleted when the snapshot is removed.

Transparent Page Sharing Optimized for NUMA

Many ESX Server workloads present opportunities for sharing memory across virtual machines. For example, several virtual machines may be running instances of the same guest operating system, have the same applications or components loaded, or contain common data. In such cases, ESX Server systems use a proprietary transparent page-sharing technique to securely eliminate unneeded copies of memory pages. With memory sharing, a workload running in virtual machines frequently consumes less memory than it would when running on physical machines. As a result, higher levels of overcommitment can be supported efficiently.

Transparent page sharing for ESX Server systems has also been optimized for use on NUMA systems. On NUMA systems, pages are shared per-node, so each NUMA node has its own local copy of heavily shared pages. When virtual machines use shared pages, they don’t need to access remote memory.
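As a toy illustration of content-based page sharing (this is not VMware's proprietary implementation; names are invented), identical pages can be collapsed to a single backing copy keyed by a content hash:

import hashlib

def share_pages(pages):
    # Keep one backing copy per unique page content, keyed by its hash.
    store = {}
    mapping = []
    for page in pages:
        digest = hashlib.sha256(page).hexdigest()
        store.setdefault(digest, page)
        mapping.append(digest)
    return store, mapping

# Three virtual pages, two of them identical zero-filled pages:
pages = [b"\x00" * 4096, b"\x00" * 4096, b"data".ljust(4096, b"\x00")]
store, mapping = share_pages(pages)
print(len(store))  # 2 backing copies serve 3 virtual pages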

Microsoft SMB

Microsoft SMB Protocol and CIFS Protocol Overview (Windows)

The Server Message Block (SMB) Protocol is a network file sharing protocol and, as implemented in Microsoft Windows, is known as the Microsoft SMB Protocol.

IBM SMB: Server Message Block protocol

Server Message Block (SMB) protocol is an IBM protocol for sharing files, printers, serial ports, and the like between computers.

The SMB 1 protocol often uses 16-bit sizes; SMB 2 uses 32 or 64 bits for many of these fields.

Centralized Restore

When performing a centralized restore, you have a group of virtual machines on ESX Server, a proxy, and a backup agent on the proxy in a dedicated virtual machine that you are planning to use to restore your data. In this case, use the backup software to get the data to the proxy that is running the agent. After the administrator restores the data to the central server, copy it back to the virtual machine using the Common Internet File System (CIFS) remote-access file-sharing protocol.

Pros: The number of agents to maintain is minimal.

Cons: Because data restoration is centralized, an administrator must be involved in file-level restoration.

Per-Group Restore

When performing a per-group restoration, one virtual machine has a backup agent for each group, such as accounting, engineering, and marketing. The group administrator restores workflows to a per-group restore host. Files are copied to a target virtual machine using a CIFS file share.


Pros: Restorations can be delegated. This type of restoration is a good compromise between the number of agents and ease of restoration.

Cons: This process is not a complete self-service restoration.

Self-Service Restore

Backup agents are deployed in every virtual machine. The user can use the agent to back up data to tape and restore it the same way. The backup agent in the virtual machine is used to restore the data.

Pros: This process is a self-service restoration.

Cons: Agents are required in each virtual machine.

Clustering Virtual Machines on a Single Host (Cluster in a Box)

A cluster in a box consists of two clustered virtual machines on a single physical machine. A cluster in a box supports two virtual machines on the same ESX Server host connected to the same storage (either local or remote). This scenario supports simple clustering for dealing with software or administration errors, as well as failures in the guest operating system. It cannot protect you in case of hardware failures. It can also be useful for testing cross-host clustering before distributing the virtual machines across multiple hosts.

Clustering Virtual Machines Across Physical Hosts (Cluster Across Boxes)

A cluster across boxes consists of virtual machines on different physical machines. In this scenario, the storage is on a shared physical device, so both virtual machines can access the data. If either the virtual machine or the physical machine on Node1 becomes unavailable, the data is still available from the virtual machine on Node2. Using this type of cluster, you can deal with the hardware failure on the physical machine.

Clustering Multiple Virtual Machines Across Multiple Physical Hosts

You can expand the cluster-across-boxes model and place multiple virtual machines on multiple physical machines. For example, you can consolidate four clusters of two physical machines each to two physical machines with four virtual machines each. This setup protects you from both hardware and software failures. At the same time, this setup results in significant hardware cost savings.

Clustering Physical Machines and Virtual Machines (Standby Host)

For a simple clustering solution with low hardware requirements, you might choose to have one standby host. In that case, set up your system to have a virtual machine corresponding to each physical machine on the standby host. In case of hardware failure in one of the physical machines, the virtual machine on the standby host can take over for that physical host.

Roles


VirtualCenter and ESX Server grant access to objects only to users who have been assigned permissions for the object. When you assign a user or group permissions for the object, you do so by pairing the user or group with a role. A role is a predefined set of privileges.

VirtualCenter and ESX Server hosts provide default roles:

System roles – System roles are permanent and the privileges associated with these roles cannot be changed.

Sample roles – Sample roles are provided for convenience as guidelines and suggestions. These roles can be modified or removed.

The roles, their types, and their user capabilities are as follows:

No Access (system) – Cannot view or change the assigned object. VI Client tabs associated with an object display without content. This is the default role for all users except those in the Administrators group.

Read Only (system) – View the state and details about the object. View all the tab panels in the VI Client except the Console tab; cannot perform any actions through the menus and toolbars.

Administrator (system) – All privileges for all objects. Add, remove, and set access rights and privileges for all the VirtualCenter users and all the virtual objects in the VMware Infrastructure environment. This is the default role for all members of the Administrators group.

Virtual Machine User (sample) – Perform actions on virtual machines only. Interact with virtual machines, but not change the virtual machine configuration. This includes: all privileges for the scheduled tasks privileges group; selected privileges for the global items and virtual machine privileges groups; no privileges for the folder, datacenter, datastore, network, host, resource, alarms, sessions, performance, and permissions privileges groups.

Virtual Machine Power User (sample) – Perform actions on virtual machine and resource objects. Interact with and change most virtual machine configuration settings, take snapshots, and schedule tasks. This includes: all privileges for the scheduled task privileges group; selected privileges for the global items, datastore, and virtual machine privileges groups; no privileges for the folder, datacenter, network, host, resource, alarms, sessions, performance, and permissions privileges groups.

Resource Pool Administrator (sample) – Perform actions on datastores, hosts, virtual machines, resources, and alarms. Provides resource delegation and is assigned to resource pool inventory objects. This includes: all privileges for the folder, virtual machine, alarms, and scheduled task privileges groups; selected privileges for the global items, datastore, resource, and permissions privileges groups; no privileges for the datacenter, network, host, sessions, or performance privileges groups.

Datacenter Administrator (sample) – Perform actions on global items, folders, datacenters, datastores, hosts, virtual machines, resources, and alarms. Set up datacenters, but with limited ability to interact with virtual machines. This includes: all privileges for the folder, datacenter, datastore, network, resource, alarms, and scheduled task privileges groups; selected privileges for the global items, host, and virtual machine privileges groups; no privileges for the session, performance, and permission privileges groups.

Virtual Machine Administrator (sample) – Perform actions on global items, folders, datacenters, datastores, hosts, virtual machines, resources, alarms, and sessions. This includes: all privileges for all privilege groups, except permissions.

vpxuser – This user is VirtualCenter acting as an entity with Administrator rights on the ESX Server host, allowing it to manage activities for that host. vpxuser is created at the time an ESX Server host is attached to VirtualCenter. It is not present on the ESX Server host unless the host is being managed through VirtualCenter.

When an ESX Server host is managed through VirtualCenter, VirtualCenter has privileges on the host. For example, VirtualCenter can move virtual machines to and from hosts and perform configuration changes needed to support virtual machines.

The VirtualCenter administrator, through vpxuser, can perform most of the same tasks on the host as the root user and can also schedule tasks, work with templates, and so forth. However, there are certain activities you cannot perform as a VirtualCenter administrator. These activities, which include directly creating, deleting, or editing users and groups for ESX Server hosts, can be performed only by a user with Administrator permissions directly on each ESX Server host.

root – The root user can perform a complete range of control activities on the specific ESX Server host that he or she is logged on to, including manipulating permissions, creating groups and users, working with events, and so forth. A root user logged on to one ESX Server host cannot control the activities of any other host in the broader ESX Server deployment.

VMkernel

A high-performance operating system that occupies the virtualization layer and manages most of the physical resources on the hardware, including memory, physical processors, storage, and networking controllers.