Mythbusters - Dutch VMUG 2012
DESCRIPTION
Some things never change. Or do they? VMware vSphere is getting new and improved features with every release. And these features change the characteristics and performance of the virtual machines. If you are not up to speed, you will probably manage your environment relying on old, no-longer-accurate information. The vMythbusters have collected a series of interesting hot topics that we have seen widely discussed in virtualization communities, on blogs and on Twitter. We’ve put these topics to the test in our lab to determine if they are a myth or truth.TRANSCRIPT
MYTHBUSTING GOES VIRTUAL
MATTIAS SUNDLING ERIC SLOOF
MYTHBUSTING GOES VIRTUAL
Mattias Sundling, Evangelist, Dell Software, @msundling
Eric Sloof, VMware Certified Instructor, NTPRO.NL, @esloof
INTRODUCTION
• VMware vSphere evolves with every release
• Things that used to be true aren't true anymore
• Engage in virtualization communities and social media to get up to speed
AGENDA/MYTHS
1. VMware HA works out-of-the-box
2. VMware snapshots impact performance
3. Disk provisioning type doesn't affect performance
4. Always use VMware Tools to sync the time in your VM
VMware HA works out-of-the-box
MYTH 1
MOST CONFIGURED ADMISSION CONTROL POLICY
ENABLING VMWARE HIGH AVAILABILITY
HOST FAILURES A CLUSTER TOLERATES
(Diagram: hosts ESX01-ESX03 connected to shared storage containing vm.vmdk)
DEFAULT MINIMUM SLOT SIZE
• If you have not specified a CPU reservation for a virtual machine, it is assigned a default value of 32 MHz.
• When the memory reservation is 0, the memory slot size equals the virtual machine's memory overhead.
(Diagram: default slot size of 32 MHz / 69 MB applied to VM1 through VMn)
SLOT SIZE BASED ON RESERVATION
• vSphere HA calculates the CPU and memory slot size by taking the largest CPU reservation and the largest memory reservation of any powered-on virtual machine.
(Diagram: slot size of 512 MHz / 1093 MB, derived from the largest reservations among VM1 through VMn)
HA ADVANCED SETTINGS
• das.slotcpuinmhz
• das.slotmeminmb
• das.vmcpuminmhz
• das.vmmemoryminmb
(Diagram: a slot is composed of a CPU reservation and a memory reservation)
SPECIFY A FIXED SLOT SIZE EXPLICITLY
VMS REQUIRING MULTIPLE SLOTS
(Diagram: six VMs with 512 MHz / 512 MB reservations, comparing reservation against slot size)
• You can also determine the risk of resource fragmentation in your cluster by viewing the number of virtual machines that require multiple slots.
• VMs might require multiple slots if you have specified a fixed slot size or a maximum slot size using advanced options.
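The slot math described on the slides above can be sketched in a few lines of Python. This is an illustration of the stated rules, not VMware's actual code; the dictionary field names and the example reservations are made up, except for the 32 MHz / 69 MB and 512 MHz / 1093 MB figures taken from the slides.

```python
import math

# Defaults from the slides: 32 MHz CPU if no reservation is set; the memory
# slot falls back to the VM's memory overhead when the reservation is 0.
def slot_size(vms, default_cpu_mhz=32):
    """vms: list of dicts with cpu_res_mhz, mem_res_mb, mem_overhead_mb."""
    cpu = max(max(vm["cpu_res_mhz"], default_cpu_mhz) for vm in vms)
    mem = max(max(vm["mem_res_mb"], vm["mem_overhead_mb"]) for vm in vms)
    return cpu, mem

def slots_needed(cpu_res_mhz, mem_res_mb, cpu_slot_mhz, mem_slot_mb):
    """With a fixed slot size, a large reservation can span several slots."""
    return max(1,
               math.ceil(cpu_res_mhz / cpu_slot_mhz),
               math.ceil(mem_res_mb / mem_slot_mb))

# A single VM with a 512 MHz / 1093 MB reservation drags the slot size up
# for the whole cluster:
vms = [
    {"cpu_res_mhz": 0,   "mem_res_mb": 0,    "mem_overhead_mb": 69},
    {"cpu_res_mhz": 512, "mem_res_mb": 1093, "mem_overhead_mb": 69},
]
print(slot_size(vms))                        # -> (512, 1093)

# With a fixed 512 MHz / 512 MB slot, a VM reserving 1024 MB needs 2 slots:
print(slots_needed(512, 1024, 512, 512))     # -> 2
```

This also shows why one over-reserved VM inflates the slot size cluster-wide, which is the fragmentation risk the slide warns about.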
FRAGMENTED FAILOVER CAPACITY
(Diagram: hosts ESX1-ESX3 connected to shared storage containing vm.vmdk)
WORST CASE SCENARIO
(Diagram: ESX01 at 3.6 GHz / 16 GB, ESX02 at 3.6 GHz / 16 GB, ESX03 at 3.6 GHz / 32 GB, all on shared storage containing vm.vmdk)
KEEP HOSTS THE SAME SIZE
Host memory: 3 * 16 GB versus host memory: 2 * 16 GB + 1 * 32 GB
PERCENTAGE OF CLUSTER RESOURCES RESERVED
(Diagram: hosts ESX01-ESX03 connected to shared storage containing vm.vmdk)
PERCENTAGE RESERVED AS FAILOVER CAPACITY
ADMISSION CONTROL BASED ON RESERVATIONS
• vSphere HA uses the actual individual reservations of the virtual machines.
• The CPU component is computed by summing the CPU reservations of the powered-on VMs.
COMPUTING THE CURRENT FAILOVER CAPACITY
• If you have not specified a CPU reservation for a VM, it is assigned a default value of 32 MHz.
RESOURCES RESERVED IS NOT UTILIZATION
• The Current CPU Failover Capacity is computed by subtracting the total CPU resource requirements from the total host CPU resources and dividing the result by the total host CPU resources.
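The calculation described above can be sketched as follows. This is a simplified illustration with made-up host capacities and reservations; vSphere HA performs the same computation for memory alongside CPU.

```python
# Current CPU Failover Capacity, as described on the slide:
# (total host CPU - total CPU reservations) / total host CPU
def current_cpu_failover_capacity(host_cpu_mhz, vm_cpu_res_mhz, default_mhz=32):
    total_host = sum(host_cpu_mhz)
    # Each powered-on VM without a reservation counts as 32 MHz by default.
    total_res = sum(max(r, default_mhz) for r in vm_cpu_res_mhz)
    return (total_host - total_res) / total_host * 100

hosts = [3600, 3600, 3600]           # three hosts, 3.6 GHz each (example)
vm_reservations = [0, 0, 512, 1000]  # per-VM CPU reservations in MHz
print(round(current_cpu_failover_capacity(hosts, vm_reservations), 1))  # -> 85.4
```

Admission control then compares this current capacity against the configured "percentage reserved" and blocks power-ons that would drop below it.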
PERCENTAGE RESERVED ADVANCED SETTING
• The default CPU reservation for a VM can be changed using the das.vmcpuminmhz advanced attribute
• das.vmmemoryminmb defines the default memory resource value assigned to a VM
WHAT ABOUT THE WEB CLIENT?
SPECIFY FAILOVER HOSTS ADMISSION CONTROL POLICY
(Diagram: hosts ESX01-ESX03 connected to shared storage containing vm.vmdk)
SPECIFY FAILOVER HOSTS ADMISSION CONTROL POLICY
• Configure vSphere HA to designate specific hosts as the failover hosts
THE FAILOVERHOST
• To ensure that spare capacity is available on a failover host, you are prevented from powering on virtual machines or using vMotion to migrate VMs to a failover host.
• DRS does not use a failover host for load balancing.
• If you use the Specify Failover Hosts admission control policy and designate multiple failover hosts, DRS does not attempt to enforce VM-VM affinity rules for virtual machines that are running on failover hosts.
STATUS OF THE CURRENT FAILOVER HOSTS
Red - The host is disconnected, in maintenance mode, or has vSphere HA errors.
Green - The host is connected, not in maintenance mode, and has no vSphere HA errors. No powered-on VMs reside on the host.
Yellow - The host is connected, not in maintenance mode, and has no vSphere HA errors. However, powered-on VMs reside on the host.
MYTH BUSTED
• VMware High Availability needs to be configured
• Be careful with reservations
• Always check run-time information
VMware snapshots impact performance
MYTH 2
WHAT IS A SNAPSHOT?
• Preserves the state and data of a VM at a specific point in time
• Data includes virtual disks, settings and, optionally, memory
• Allows you to revert to a previous state
• Typically used by VM admins when making changes, and by backup software
• ESX 3 and ESX(i) 4 had issues with deleting snapshots
• ESXi 5 improved snapshot consolidation
WHAT IS A SNAPSHOT?
File - Description
.vmdk - Original virtual disk
delta.vmdk - Snapshot delta disk
.vmsd - Database file with the relations between snapshots
.vmsn - Memory file
• Snapshots grow in 16 MB chunks, which requires locking
LOCKS
• Locks are necessary when creating, deleting and growing snapshots, powering on/off, and creating VMDKs
• ESX(i) 4 used SCSI-2 reservations, which lock the entire LUN
LOCKS
• ESXi 5 uses the Atomic Test & Set (ATS) VAAI primitive
  – Locks only the individual VM
  – Requires a VAAI-enabled array and VMFS-5
PERFORMANCE
• Locking: ATS increases performance by up to 70% compared to SCSI-2 reservations
• Normal operations are affected by:
  – Snapshot age
  – Number of snapshots
  – Snapshot size
• Be careful with snapshots in production!
• Improvements to snapshot management and locking
• Snapshots still have an impact on performance
MYTH NOT BUSTED
Disk provisioning type doesn’t affect performance
MYTH 3
DISK TYPES
BLOCK ALLOCATION
(Diagrams: VMDK file size versus written blocks for Thick Provision Lazy Zeroed, Thin Provision, and Thick Provision Eager Zeroed disks)
THE ISCSI LABORATORY
• Iomega StorCenter px6-300d with 6 SATA 7,200 RPM disks
• Windows 2008 R2, 4096 MB RAM, 1 vCPU, hardware version 9
• VMware vSphere 5.1
• Single Intel 1 Gb Ethernet NIC
• Cisco 2960 switch, MTU size 1500
3 DIFFERENT DISKS
• Thick Provision Lazy Zeroed
• Thin Provision
• Thick Provision Eager Zeroed
THICK PROVISION LAZY ZEROED
Average Write 13.3 MB/s - Access time: 44.8 ms
THIN PROVISION
Average Write 13.7 MB/s - Access time: 46.8 ms
THICK PROVISION EAGER ZEROED
Average Write 86.6 MB/s - Access time: 9.85 ms
COMPARISON
Thick Provision Lazy Zeroed: average write 13.3 MB/s, access time 44.8 ms
Thin Provision: average write 13.7 MB/s, access time 46.8 ms
Thick Provision Eager Zeroed: average write 86.6 MB/s, access time 9.85 ms
MIGRATION
• Storage vMotion is able to migrate the disk format of a virtual machine
MYTH BUSTED
• Thin and Lazy Zeroed disks have the same speed
• Once allocated, these disks are as fast as Eager Zeroed disks
• Thick Provision Eager Zeroed offers the best performance from the first write on
Always use VMware Tools to sync the time in your VM
MYTH 4
TIME SYNC PROBLEMS
• VMs do not have access to native physical hardware timers
• Scheduling can cause time to fall behind
• CPU/memory overcommit increases the risk
• People mix different time sync options
VMWARE TOOLS
• ESX(i) 4 and prior: not possible to adjust time backwards
• ESXi 5: time sync improved to be more accurate, and can also adjust time backwards
• Enable/disable periodic sync in the VMware Tools GUI, vCenter or the VMX file
VMWARE TOOLS
• Default periodic sync interval is 60 sec
• Sync is forced even when periodic sync is disabled:
  – Resume, Revert Snapshot, Disk Shrink and vMotion
• To disable time sync completely (e.g. for testing scenarios), configure the VMX file:
  tools.syncTime = FALSE
  time.synchronize.continue = FALSE
  time.synchronize.restore = FALSE
  time.synchronize.resume.disk = FALSE
  time.synchronize.shrink = FALSE
  time.synchronize.tools.startup = FALSE
  time.synchronize.resume.host = FALSE
GUEST OS SERVICES
• Windows (W32Time service)
  – Windows 2000 uses SNTP
  – Windows 2003+ uses NTP and provides better sync options and accuracy
  – Domain-joined VMs sync from the DC
  – Use Group Policy to control settings
• Linux (NTP)
  – Configure /etc/ntp.conf
  – Start ntpd:
    chkconfig ntpd on
    /etc/init.d/ntpd start
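As a minimal illustration of the ntpd configuration mentioned above (a sketch only: the pool.ntp.org hostnames are placeholders, so substitute your organization's NTP servers):

```
# /etc/ntp.conf - minimal example; server names are placeholders
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
driftfile /var/lib/ntp/drift
```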
BEST PRACTICES
• ESX(i) hosts:
  – Configure multiple NTP servers
  – Start the NTP service
• Virtual machines:
  – Disable VMware Tools periodic sync
  – DC: configure multiple NTP servers (same as the ESX(i) hosts)
  – Domain-joined VMs will sync with the DC
  – If not domain-joined, configure W32Time or NTP manually
• Do not use both VMware Tools periodic sync and guest OS time sync simultaneously!
MYTH BUSTED
• Use W32Time or NTP
• Do not use VMware Tools periodic sync
SUMMARY
• Myth 1: VMware High Availability needs to be configured; be careful with reservations and always check run-time information
• Myth 2: Improvements to snapshot management and locking, but still a performance impact
• Myth 3: Use Thick Provision Eager Zeroed disks for best I/O performance
• Myth 4: Use W32Time or NTP to sync time instead of VMware Tools
VMWORLDTV
• http://www.youtube.com/VMworldTV
QUESTIONS
Mattias Sundling, Evangelist, Dell Software, [email protected], @msundling
Eric Sloof, VMware Certified Instructor, NTPRO.NL, [email protected], @esloof