What's New in Performance in VMware vSphere 5.0
TECHNICAL WHITE PAPER
Introduction

VMware vSphere 5.0 (vSphere), the best VMware solution for building cloud infrastructures, pushes further ahead in performance and scalability. vSphere 5.0 enables higher consolidation ratios with unequaled performance. It supports the build-out of private and hybrid clouds at even lower operational costs than before.

The following are some of the performance highlights:
VMware vCenter Server scalability
- Faster high-availability (HA) configuration times
- Faster failover rates: 60% more virtual machines within the same time
- Lower management operational latencies
- Higher management operations throughput (Ops/min)

Compute
- 32-way vCPU scalability
- 1TB memory support

Storage
- vSphere Storage I/O Control (Storage I/O Control) now supports NFS. Sets storage quality-of-service priorities per virtual machine for better access to storage resources for high-priority applications.

Network
- vSphere Network I/O Control (Network I/O Control): Gives a higher granularity of network load balancing.

vSphere vMotion
- vSphere vMotion (vMotion): Multi-network adaptor enablement that contributes to an even faster vMotion.
- vSphere Storage vMotion (Storage vMotion): Fast, live storage migration with I/O mirroring.
VMware vCenter Server and Scalability Enhancements

Virtual Machines per Cluster

Even with the increased number of supported virtual machines per cluster, vSphere 5.0 delivers a higher throughput (Ops/min) of management operations, up to 120% higher, depending on the intensity of the operations. Examples of management operations include virtual machine power-on, virtual machine power-off, virtual machine migrate, virtual machine register, virtual machine unregister, create folder, and so on. Operation latency improvements range from 25% to 75%, depending on the type of operation and the background load.
HA (High Availability)
Significant advances have been made in vSphere 5.0 to reduce configuration time and improve failover performance. A 32-host, 3,200-virtual-machine, HA-enabled cluster can be configured 9x faster in vSphere 5.0. In the same amount of time as with previous editions, vSphere 5.0 can fail over 60% more virtual machines. The minimal recovery time, from failure to the first virtual machine restart, has been improved by 44.4%. The average virtual machine failover time has improved by 14%.

In addition, in vSphere 5.0, the default CPU slot size (that is, spare CPU capacity reserved for each HA-protected virtual machine) is smaller in comparison to its value in vSphere 4.1, to enable a higher consolidation ratio. As a result, Distributed Power Management (DPM) might be able to put more hosts into standby mode when the cluster utilization is low, leading to more power savings.
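As a rough illustration of why a smaller slot size raises the consolidation ceiling, the following sketch uses hypothetical capacity and slot-size numbers (they are not taken from this paper) and a simplified version of HA admission-control arithmetic:

# Hypothetical illustration of HA slot-size math (numbers are made up, model simplified).
# HA reserves one "slot" of spare CPU capacity per protected virtual machine,
# so a smaller default slot size yields more slots from the same failover capacity.

def ha_cpu_slots(per_host_mhz, hosts, host_failures_tolerated, slot_mhz):
    """CPU slots available after reserving capacity for the tolerated host failures."""
    usable_hosts = hosts - host_failures_tolerated
    return (usable_hosts * per_host_mhz) // slot_mhz

cluster = dict(per_host_mhz=20000, hosts=8, host_failures_tolerated=1)

for slot_mhz in (256, 32):   # a larger vs. a smaller default CPU slot size
    slots = ha_cpu_slots(slot_mhz=slot_mhz, **cluster)
    print(f"slot size {slot_mhz} MHz -> {slots} protected virtual machine slots")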
Figure 1. vSphere 5.0 High Availability (virtual machines from a failed server restart on the operating servers in the resource pool)
CPU Enhancements
32-vCPU performance and scalability
VMware vSphere 5.0 can now support up to 32 vCPUs. Using a large number of vCPUs in a virtual machine can potentially help large-scale, mission-critical, Tier 1 applications achieve higher performance and throughput and/or faster run time for certain applications.

The VMware Performance Engineering lab conducted a variety of experiments, including commercial Tier 1 and HPC (high-performance computing) applications. Performance was observed close to native, 92-97%, as virtualized applications scaled to 32 vCPUs.
Intel SMT-related CPU scheduler enhancements

Intel SMT architecture exposes two hardware contexts from a single core. The benefit of utilizing two hardware contexts ranges from 10% to 30% in improved application performance, depending on the workloads. In vSphere 5.0, the VMware ESXi CPU scheduler's policy is tuned for this type of architecture to balance between maximum throughput and fairness between virtual machines. In addition to the performance optimizations made around SMT CPU scheduling in vSphere 4.1 (running the VMware ESXi hypervisor architecture), we have further enhanced the SMT scheduler in VMware ESXi 5.0 to ensure high efficiency and performance for mission-critical applications.
Virtual NUMA (vNUMA)
Virtual NUMA (vNUMA) exposes the host NUMA topology to the guest operating system, enabling NUMA-aware guest operating systems and applications to make the most efficient use of the underlying hardware's NUMA architecture.

Virtual NUMA, which requires virtual hardware version 8, can provide significant performance benefits for virtualized operating systems and applications that feature NUMA optimization.
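To make the idea concrete, here is a minimal sketch (not from this paper) of how a NUMA-aware application inside a Linux guest could discover the topology that vNUMA exposes, by reading the standard sysfs nodes; the paths assume a typical Linux guest and are not vSphere-specific.

# Minimal sketch: inspect the NUMA topology a Linux guest sees (for example, via vNUMA).
# Assumes a typical Linux guest with sysfs mounted.
import glob
import os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    total_kb = "unknown"
    with open(os.path.join(node, "meminfo")) as f:
        for line in f:
            if "MemTotal" in line:
                total_kb = line.split()[-2] + " kB"
                break
    print(f"{os.path.basename(node)}: CPUs {cpus}, MemTotal {total_kb}")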
Storage Enhancements
NFS support for Storage I/O Control

vSphere 5.0 Storage I/O Control now supports NFS and network-attached storage (NAS)-based shares on shared datastores. It now provides the following benefits for NFS:

- Dynamically regulates multiple virtual machines' access to shared I/O resources, based on disk shares assigned to the virtual machines (see the sketch after this list).
- Helps isolate the performance of latency-sensitive applications that employ smaller (
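As a hedged illustration of the per-virtual-machine disk shares mentioned above, the following sketch uses the vSphere Python SDK (pyVmomi) to raise the shares on a virtual disk; the object and property names reflect my reading of the vSphere API and should be verified against the API reference, and the connection plumbing is omitted.

# Hedged sketch (pyVmomi assumed): give one VM's virtual disk higher disk shares
# so Storage I/O Control favors it when the shared datastore is congested.
from pyVmomi import vim

def set_disk_shares(vm, share_count):
    """Set custom storage I/O shares on the VM's first virtual disk."""
    disk = next(d for d in vm.config.hardware.device
                if isinstance(d, vim.vm.device.VirtualDisk))
    disk.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
        shares=vim.SharesInfo(level=vim.SharesInfo.Level.custom,
                              shares=share_count))
    change = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=disk)
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

# Usage (after connecting and locating the VM): set_disk_shares(vm, 2000)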
Memory Management Enhancements
1TB vMem support

In vSphere 5.0, virtual machines can now support vMem up to 1TB. With the new 1TB vMem support, Tier 1 applications consuming large amounts of memory (>255GB), such as in-memory databases, can now be virtualized. Experimental measurements with memory-intensive workloads for virtual machines with up to 1TB vMem demonstrate similar performance when compared to identical physical configurations.
SSD Swap Cache
vSphere 5.0 can be configured to enable VMware ESXi host swapping to a solid-state disk (SSD). In low-host-memory-available states (high memory usage), where guest ballooning, transparent page sharing (TPS) and memory compression have not been sufficient to reclaim the needed host memory, hypervisor swapping is used as the last resort to reclaim memory from the virtual machines. vSphere employs three methods to address the limitations of hypervisor swapping and to improve hypervisor swapping performance:

- Randomly selecting the virtual machine physical pages to be swapped out. This helps mitigate the impact of VMware ESXi pathologically interacting with the guest operating system's memory management heuristics. This has been enabled from early releases of VMware ESX.
- Memory compression of the virtual machine pages that are targeted by VMware ESXi to be swapped out. This feature, introduced in vSphere 4.1, reduces the number of memory pages that must be swapped out to the disk, while reclaiming host memory effectively at the same time and thereby benefiting application performance.
- vSphere 5.0 now enables users to choose to configure a swap cache on the SSD. VMware ESXi 5.0 will then use this swap cache to store the swapped-out pages instead of sending them to the regular and slower hypervisor swap file on the disk. Upon the next access to a page in the swap cache, the page will be retrieved quickly from the cache and then removed from the swap cache to free up space. Because SSD read latencies are an order of magnitude faster than typical disk access latencies, this significantly reduces the swap-in latencies and greatly improves application performance in high memory overcommitment scenarios.
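The following toy model (mine, not VMware's implementation) illustrates the swap-cache behavior described above: pages swapped out go to a small SSD-backed cache while space allows, and a page swapped back in is served from that cache and then evicted from it to free space.

# Toy model of the SSD swap-cache behavior described above (illustration only).
class HypervisorSwap:
    def __init__(self, ssd_cache_capacity):
        self.ssd_cache_capacity = ssd_cache_capacity
        self.ssd_cache = {}   # page id -> contents (fast SSD swap space)
        self.swap_file = {}   # page id -> contents (slow on-disk swap file)

    def swap_out(self, page_id, contents):
        # Prefer the SSD cache; fall back to the on-disk swap file when it is full.
        if len(self.ssd_cache) < self.ssd_cache_capacity:
            self.ssd_cache[page_id] = contents
        else:
            self.swap_file[page_id] = contents

    def swap_in(self, page_id):
        # A page found in the SSD cache is returned quickly and then removed
        # from the cache to free space for future swap-outs.
        if page_id in self.ssd_cache:
            return self.ssd_cache.pop(page_id)
        return self.swap_file.pop(page_id)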
Figure 3. SSD Swap Cache (page table entries backed by main memory, flash-based swap cache, or the swap file)
Network Enhancements
Network I/O Control

Network I/O Control (NIOC) enables the allocation of network bandwidth to resource pools. In vSphere 5.0, users can create new resource pools to associate with port groups and specify 802.1p tags, whereas in vSphere 4.1, only predefined resource pools could be used for bandwidth allocation. So different virtual machines can now be in different resource pools and be given a higher or lower share of bandwidth than the others. The predefined resource groups include vMotion, NFS, iSCSI, vSphere Fault Tolerance (Fault Tolerance), virtual machine, and management.
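As a simple illustration (with made-up share values, not NIOC defaults) of how shares translate into bandwidth when a link is saturated, the sketch below divides a 10GbE link among resource pools in proportion to their shares.

# Illustration only: proportional bandwidth split by shares on a congested link.
def split_bandwidth(link_gbps, shares_by_pool):
    total = sum(shares_by_pool.values())
    return {pool: link_gbps * s / total for pool, s in shares_by_pool.items()}

pools = {"virtual machine": 100, "vMotion": 50, "NFS": 50, "management": 25}
for pool, gbps in split_bandwidth(10.0, pools).items():
    print(f"{pool}: {gbps:.2f} Gbps")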
splitRxMode
vSphere 5.0 introduces a new way of doing network packet receive processing in the VMkernel. In previous releases of vSphere, all receive packet processing for a network queue was done in a single context, which might be shared among various virtual machines. In cases where there is a higher density of virtual machines per network queue, it is possible for the context to become resource constrained. Multicast is one such workload where multiple receiver virtual machines share one network queue. vSphere 5.0 includes a new mechanism to split the cost of receive packet processing across multiple contexts. SplitRxMode can specify for each virtual machine whether receive packet processing should be in the network queue context or in a separate context. We call this mode splitRxMode, and it can be enabled on a per-vNIC basis by editing the VMX file of the virtual machine with ethernetX.emuRxMode = 1 for the Ethernet device.

NOTE: This configuration is available only with vmxnet3. Once the option is enabled, the packet processing for the virtual machine is handled by a separate context. This improves multicast performance significantly. However, there is an overhead associated with splitting the cost among various PCPUs. This cost can be attributed to higher CPU utilization per packet processed, so VMware recommends enabling this option for only those workloads, such as multicast, that are known to benefit from this feature.
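For example, here is a hedged sketch of setting that option programmatically through the vSphere Python SDK (pyVmomi) rather than editing the VMX file by hand; "ethernet0" and the connection plumbing are placeholders, and the SDK usage should be checked against the API reference.

# Hedged sketch (pyVmomi assumed): enable splitRxMode on a vmxnet3 vNIC by adding
# the ethernetX.emuRxMode option to the VM's extra configuration.
from pyVmomi import vim

def enable_split_rx(vm, ethernet_index=0):
    """Queue a reconfigure task that sets ethernetN.emuRxMode = 1 for the VM."""
    option = vim.option.OptionValue(key=f"ethernet{ethernet_index}.emuRxMode",
                                    value="1")
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(extraConfig=[option]))

# Usage (after connecting with pyVim.connect.SmartConnect and locating the VM):
# task = enable_split_rx(vm, ethernet_index=0)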
VMware performance labs conducted a performance study for multiple receivers with the sender transmitting multicast packets at 16,000 packets per second. As each additional receiver was added, there was an increase in packets handled by the networking context. After the load of the context reached its maximum, there were significant packet losses. By enabling splitRxMode on these virtual machines, the load on the context was decreased, thereby increasing the number of multicast receivers handled without significant drops. Figure 4 shows that there is no noticeable difference in loss rate for the setup until there are 24 virtual machines powered on. After there are more than 24 virtual machines powered on, there is an almost 10-25% packet loss. Enabling splitRxMode for the virtual machines reduces the loss rate to less than 0.01%.
Figure 4. Multicast Performance (percent packet loss vs. number of receivers, default vs. splitRxMode)
vSphere DirectPath I/O
vSphere DirectPath I/O (DirectPath I/O) enables guest operating systems to directly access network devices. DirectPath I/O for vSphere 5.0 has been enhanced to allow the vMotion of a virtual machine containing DirectPath I/O network adaptors on the Cisco Unified Computing System (UCS) platform. On such platforms, with proper setup and ports available for passthrough, vSphere 5.0 can transition the device from one that is paravirtualized to one that is directly accessed, and the other way around. While both modes can sustain high throughput (beyond 10Gbps), direct access can additionally save CPU cycles in workloads with high packet rates (for example, approximately greater than 50kpps).

Because DirectPath I/O does not support some vSphere features, including memory overcommitment and NIOC, VMware recommends using DirectPath I/O only for workloads with very high packet rates, where saving CPU cycles can be critical to achieving the desired performance.
Host Power Management Enhancements
VMware Host Power Management (VMware HPM) in vSphere 5.0 provides power savings by working at the host level. With vSphere 5.0, the default power management policy is balanced. The balanced policy uses an algorithm that exploits a processor's P-states. Workloads operating under low CPU loads can expect to see some power savings (depending entirely on the workload's CPU usage characteristics) with minimal performance loss (dependent on application CPU usage characteristics). VMware strongly advises performance-sensitive users to conduct controlled experiments to observe the power savings and performance of their application with vSphere 5.0. If the performance loss is unacceptable, switch VMware ESX to the high-performance mode. Most of the workloads operating under low loads can see up to 5% in power savings when using balanced mode.
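If you prefer to script that policy switch rather than use the vSphere Client, a hedged sketch using the vSphere Python SDK (pyVmomi) follows; the power-system property and method names, and the policy name matching, are my assumptions and should be verified against the vSphere API reference.

# Hedged sketch (pyVmomi assumed): list the host's available power policies and
# apply one of them (for example, the high-performance policy) by its key.
# 'host' is a vim.HostSystem obtained through a pyVmomi connection (omitted here).

def set_power_policy(host, wanted_name_fragment="high performance"):
    """Pick a policy whose name contains the fragment and make it active."""
    power_system = host.configManager.powerSystem
    for policy in power_system.capability.availablePolicy:
        print(policy.key, policy.name)            # inspect what the host offers
        if wanted_name_fragment.lower() in policy.name.lower():
            power_system.ConfigurePowerPolicy(key=policy.key)
            return policy.name
    return None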
VMware vMotion
Multi-network adaptor enablement for vMotion

The new performance enhancements in vSphere 5.0 enable vMotion to effectively saturate a 10GbE network adaptor's bandwidth during the migration, significantly reducing vMotion transfer times. In addition, VMware ESX 5.0 adds a multi-network adaptor feature that enables users to employ multiple network adaptors for vMotion. The VMkernel will transparently load-balance the vMotion traffic over all the vMotion-enabled vmknics in an effort to saturate all of the connections. In fact, even when there is a single vMotion, the VMkernel uses all the available network adaptors to spread the vMotion traffic.
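A hedged sketch of how several existing vmknics might be tagged for vMotion through the vSphere Python SDK (pyVmomi) is shown below; the vmk device names are hypothetical, the SelectVnicForNicType call reflects my understanding of the host virtual NIC manager API and should be verified, and the VMkernel interfaces themselves must already exist.

# Hedged sketch (pyVmomi assumed): mark several existing VMkernel NICs for vMotion
# so the VMkernel can load-balance vMotion traffic across them.
# 'host' is a vim.HostSystem obtained through a pyVmomi connection (omitted here).

def enable_multi_nic_vmotion(host, vmk_devices=("vmk1", "vmk2")):
    """Tag each listed vmknic (hypothetical names) for the 'vmotion' NIC type."""
    vnic_manager = host.configManager.virtualNicManager
    for device in vmk_devices:
        vnic_manager.SelectVnicForNicType("vmotion", device)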
Figure 5 summarizes the vMotion performance enhancements in vSphere 5.0 over vSphere 4.1. The test environment modeled both a single-instance SQL Server virtual machine (virtual machine configured with four vCPUs and 16GB memory and running an OLTP workload) and a multiple-instance SQL Server virtual machine (each virtual machine configured with four vCPUs and 16GB memory and running an OLTP workload) deployment scenario for the following configurations:

- vSphere 4.1: source and destination hosts configured with a single 10GbE port for vMotion
- vSphere 5.0: source and destination hosts configured with a single 10GbE port for vMotion
- vSphere 5.0: source and destination hosts configured with two 10GbE ports for vMotion
Figure 5. vMotion Performance in vSphere 4.1 and vSphere 5.0 (elapsed time in seconds for 1 and 2 virtual machines; vSphere 4.1, vSphere 5, and vSphere 5 with 2 NICs)
The figure clearly shows the enhancements made in vSphere 5.0 to reduce the total elapsed time in both the single-virtual-machine and multiple-instance SQL Server virtual machine deployment scenarios. The average percentage drop in vMotion transfer time was a little over 30% in vSphere 5.0 when using a single 10GbE network adaptor. When using two 10GbE network adaptors for vMotion (enabled by the new multi-network adaptor feature in vSphere 5.0), the total migration time reduced dramatically: a factor of 2.3x improvement over the vSphere 4.1 result. The figure also illustrates the fact that the multi-network adaptor feature transparently load-balances the vMotion traffic onto multiple network adaptors, even in the case where a single virtual machine is subjected to vMotion. This feature can be especially handy when a virtual machine is configured with a large amount of memory.
Metro vMotion
vSphere 5.0 introduces a new latency-aware Metro vMotion feature that provides better performance over long-latency networks and also increases the round-trip latency limit for vMotion networks from 5 milliseconds to 10 milliseconds. Previously, vMotion was supported only on networks with round-trip latencies of up to 5 milliseconds.
VMware Storage vMotion (Live Migration) Enhancements
vSphere 5.0 now supports live migration for storage with the use of I/O mirroring. In previous releases, live migration moved memory and device state only for a virtual machine, limiting migration to hosts with identical shared storage. Live storage migration overcomes this limitation by enabling the movement of virtual disks spanning volumes and distance. This enables greater virtual machine mobility, zero-downtime maintenance and upgrades of storage elements, and automatic storage load balancing.

VMware has been able to greatly reduce the time it takes for the storage to migrate, starting with the use of snapshots in Virtual Infrastructure 3.5, moving to using dirty block tracking in vSphere 4.0, and now to using I/O mirroring in vSphere 5.0.
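To make the I/O-mirroring idea concrete, here is a toy sketch of mine (not VMware's implementation): while the bulk copy walks the source disk, new guest writes are applied to both source and destination, so the destination is already consistent when the copy finishes.

# Toy model of live storage migration with I/O mirroring (illustration only).
def migrate_with_io_mirroring(source_disk, guest_writes):
    """Copy source_disk while mirroring incoming guest writes to both copies."""
    destination_disk = [None] * len(source_disk)
    writes = iter(guest_writes)                        # batches of (block, data) pairs

    for block in range(len(source_disk)):
        destination_disk[block] = source_disk[block]   # bulk copy, block by block
        for index, data in next(writes, []):           # writes arriving meanwhile
            source_disk[index] = data                  # guest still runs on the source
            destination_disk[index] = data             # ...and the write is mirrored
    return destination_disk

disk = list("abcdefgh")
# One batch of guest writes per copied block; most batches are empty in this toy run.
writes = [[], [(0, "X")], [], [(5, "Y")], [], [], [], []]
print("".join(migrate_with_io_mirroring(disk, writes)))   # -> "XbcdeYgh"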
Live storage migration has the following advantages:

- Zero-downtime maintenance: Enables customers to move virtual machines on and off storage volumes, upgrade storage arrays, perform file system upgrades, and service hardware without powering down virtual machines.
- Manual and automatic storage load balancing: Customers can continue to manually load-balance their vSphere clusters to improve storage performance, or they can take advantage of automatic storage load balancing (vSphere Storage DRS), which is now available in vSphere 5.0.
- Increased virtual machine mobility: Virtual machines are no longer pinned to the storage array they are instantiated on. Live migration works by copying the memory and device state of a virtual machine from one host to another with negligible virtual machine downtime.
Figure 6. vSphere 5.0 Storage vMotion
Tier 1 Application Performance
Microsoft Exchange Server 2010

Microsoft Exchange Server 2010 significantly benefits from the superior performance and scalability provided by vSphere 5.0. Microsoft Exchange Load Generator 2010 benchmark results indicate that vSphere 5.0 improves Microsoft Exchange transaction response times and vMotion and Storage vMotion migration times when compared with the previous vSphere release.

The following are examples of performance and scalability gains observed in comparison to vSphere 4.1:

- 6% reduction in SendMail 95th-percentile transaction latencies with a four-vCPU Exchange Mailbox Server virtual machine
- 13% reduction in SendMail 95th-percentile transaction latencies with an eight-vCPU Exchange Mailbox Server virtual machine
- 33% reduction in vMotion migration time for a four-vCPU Exchange Mailbox Server virtual machine
- 11% reduction in Storage vMotion migration time for a 350GB Exchange database VMDK
Zimbra Mail Server Performance

The full-featured, robust and open-source Zimbra Collaboration Suite (ZCS) includes email, calendaring, and collaboration software. Zimbra is an important part of the VMware cloud computing portfolio, as software as a service (SaaS) and as a prepackaged virtual appliance for SMBs.

For vSphere 5.0, virtualized ZCS performs within 95% of native performance in a virtual machine scaling up to eight vCPUs. Up to eight 4-vCPU ZCS virtual machines supporting 32,000 mail users were scaled out on a single host with just a 10% increase in Sendmail latencies compared to a single virtual machine.

In recent 4-vCPU experiments, ZCS has less than half the CPU consumption of Exchange 2010 with the same number of users, and mail user provision time for ZCS was 10x faster.
Figure 7. Zimbra CPU Efficiency over Exchange 2010 (% CPU utilization of a Zimbra Mailbox Server versus Exchange 2010, broken down into MTA+LDAP / CAS+HUB, database / indexing, and mailbox components)
Summary

VMware innovations continue to ensure that VMware vSphere 5.0 pushes the envelope of performance and scalability. The numerous performance enhancements in vSphere 5.0 enable organizations to get even more out of their virtual infrastructure and further reinforce the role of VMware as the industry leader in virtualization. vSphere 5.0 represents advances in performance, ensuring that even the most resource-intensive applications, such as large databases and Microsoft Exchange email systems, can run on private, hybrid and public clouds powered by vSphere.
References (www.vmware.com)

- Understanding Memory Resource Management in VMware vSphere 5
- Managing Performance Variance of Applications Using Storage I/O Control in VMware vSphere 5
- Performance Best Practices Guide for VMware vSphere 5
- Zimbra Mail Performance on VMware vSphere 5
- Host Power Management in VMware vSphere 5