Hyper-V Overview & Update
Andrew Fryer, Evangelist, Microsoft UK
What We Will Cover
General Hypervisor Performance
Hyper-V Performance
Some Best Practices
Links to Reference Material
Trends – Changing Market Landscape
Virtualization is exploding resulting in VM proliferation and impacting OS share
OS share: Licensed Windows 61%, Unpaid Windows 11%, Linux 21%, Unix 6%, Other 1%
[Chart: physical vs. logical server unit shipments, 2005 through 2012, scale 0 to 14,000,000]
Number of physical servers shipments used for virtualization will grow to 1.7M+ in 2012 at a CAGR of 15%
19% of physical server shipments will be used for virtualization, increasing from 11.7% in 2007
IDC Server Virtualization Forecast
[Chart: VM density, scale 0 to 9.00]
Dynamic Memory
A memory management enhancement for Hyper-V
Enables customers to dynamically grow and shrink the memory of a VM
Available as a feature in Windows Server 2008 R2 SP1
How does it work?
VM memory configuration parameters:
Initial (what the VM will boot with)
Maximum (what the VM can grow to)
Memory is pooled and dynamically distributed across VMs
Memory is dynamically allocated/removed based on VM usage with no service interruption
Guest enlightened: guests & Hyper-V work TOGETHER
Memory is added and removed via synthetic memory driver (memory VSC) support
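As a rough illustration of the pooled-and-distributed idea above, here is a small sketch. The proportional-share policy, the function name, and the pressure figures are all my own assumptions for illustration; Hyper-V's actual balancer is more sophisticated and is not being reproduced here.

```python
# Hypothetical sketch of pressure-based memory distribution across VMs.
# "initial" and "maximum" mirror the two Dynamic Memory parameters above;
# the proportional-share policy is an illustrative assumption, not the
# actual Hyper-V balancer algorithm.

def distribute_memory(vms, pool_mb):
    """Give each VM its initial amount, then share the remaining pool
    in proportion to memory pressure, capped at each VM's maximum."""
    # Every VM is guaranteed its initial (startup) memory.
    alloc = {name: vm["initial"] for name, vm in vms.items()}
    spare = pool_mb - sum(alloc.values())
    assert spare >= 0, "pool too small for initial allocations"
    total_pressure = sum(vm["pressure"] for vm in vms.values())
    for name, vm in vms.items():
        share = int(spare * vm["pressure"] / total_pressure)
        headroom = vm["maximum"] - alloc[name]
        alloc[name] += min(share, headroom)
    return alloc

# Made-up example: three VMs with differing demand on an 8 GB pool.
vms = {
    "web": {"initial": 512, "maximum": 4096, "pressure": 3},
    "sql": {"initial": 1024, "maximum": 8192, "pressure": 6},
    "idle": {"initial": 512, "maximum": 2048, "pressure": 1},
}
print(distribute_memory(vms, pool_mb=8192))
```

The point of the sketch is the shape of the policy: every guest keeps at least its initial allocation, and spare physical memory flows toward the guests reporting the most pressure, never exceeding each guest's configured maximum.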
Base Hypervisor Performance
Project Virtual Reality Check
Available from www.virtualrealitycheck.net
Done by Ruben Spruijt and Jeroen van de Kamp
Not sponsored by any one company, although VMware and Citrix have assisted the site
Results are focused on running and replacing Terminal Server workloads only, on vSphere, Hyper-V, and XenServer
Results are not for redistribution or validation, although they are public
Phase II results from February 2010, with a significant increase in vSphere performance
Project VRC Results
Enabling EPT/RVI results in a significant increase in the capacity of VMs running TS
vSphere: 90% increase
XenServer: 95% increase
Hyper-V: 154% increase
When scaling x86 TS VMs without Hyper-Threading, vSphere is 5% better than both Xen and Hyper-V
When scaling x86 TS VMs with Hyper-Threading, Xen and Hyper-V are 15% better than vSphere
When scaling up to 100 TS sessions, response times for all three hypervisors are fairly equal
Beyond 100 sessions, vSphere response times increase with each new session
When scaling x64 TS VMs, Xen and Hyper-V are within 13.6% of bare metal, and are 27% better than vSphere
2010 ESG Paper
3rd-Party Performance Validation White Paper, sponsored by Microsoft
Hyper-V is easy to install and get running for administrators familiar with Windows
Clustering is clustering
Disk performance: 95% to 99%
Workload performance: 89% to 98%
http://www.enterprisestrategygroup.com/2010/07/microsoft-hyper-v-r2-scalable-native-server-virtualization-for-the-enterprise/
http://www.infostor.com/index/articles/display/5976242552/articles/infostor/esg-lab-review/2010/july-2010/microsoft-hyper-v.html
2010 ESG Lab Highlights: VM Scalability
© 2011 Enterprise Strategy Group
Hyper-V R2 on 16 servers with Microsoft Cluster Shared Volumes (CSV) stored on a single SAN attached disk array supported 1,024 virtual machines
http://www.enterprisestrategygroup.com/2010/11/emc-symmetrix-vmax-and-microsoft-server-virtualization-scalable-enterprise-class-virtual-infrastructure/
[Chart: Virtual Machine Scalability, 1 through 16 Microsoft Hyper-V R2 Servers; x-axis: Clustered Hyper-V R2 Servers (0 to 16); y-axis: Virtual Machines (0 to 1,024)]
2010 ESG Lab Highlights: Mixed Workloads
Hyper-V R2 on 2 servers with 16 VMs sharing a single disk array:
• 18,750 mailboxes with the Microsoft Exchange 2010 Jetstress utility
• and 3,475 small database IOs per second with the Microsoft SQLIO utility
• and 650 MB/sec of database throughput with the SQLIO utility
• and 3,106 simulated web server IOPs with the Iometer utility
• and 413 MB/sec of simulated backup throughput with the Iometer utility
• with predictably fast response times and scalability
http://www.enterprisestrategygroup.com/2010/06/ibm-system-storage-ds5020ds3950-express-and-ibm-system-x3950-m2-mixed-workload-performance-in-microsoft-hyper-v-r2-environments/
[Chart: Exchange, SQL Server, Web Server, and Scan/read workloads; x-axis: Virtual Machines (1 to 4); y-axis: I/Os per second (IOPS), 0 to 14,000]
2011 ESG Lab Test Bed (Physical)
SAN: EMC CX4-960, 155 15K RPM FC disk drives, 2x4 Gbps FC per server
RAID-10 pools: Data (88), Logs (16), OS (24), Apps (16)
Servers: HP BL680c blades, up to 24 cores and 128 GB RAM per blade
Roles: SharePoint, Utilities, Exchange, Load test
LAN: F5 BIG-IP
2011 SharePoint Test Bed (Logical)
Hyper-V R2
Application: SharePoint 2010 / SQL Server 2008 R2
VM configuration: 2 vCPU, 4 GB / 4 vCPU, 32 GB
Guest OS: Microsoft Windows Server 2008 R2 SP1
Hypervisor: Microsoft Hyper-V R2
Physical OS: Windows Server 2008 R2 SP1
Virtual machine images: Fixed VHD
SQL data and logs: Fixed VHD
SQL Server
SharePoint
Web Server 1
Web Server 2
SAN
Web Server 3
Load generator: Microsoft Visual Studio 2010
2011 SharePoint Workload
Visual Studio 2010 for SharePoint Load Generation
• Hyper-V stress test with a non-blocking lightweight workload
• 89% browse
• 10% upload
• 1% check in/check out
• 22 GB SQL database
• Scale from 1 to 3 web server VMs (5 VMs total)
• Hardware load-balanced web traffic
• Constant workload
2011 SharePoint Workload Results
[Chart: Guest vCPU Utilization, 0% to 100% (3 VMs, 1 web server; SharePoint 2010, Windows 2008 R2 SP1, SQL Server 2008 R2); bars for the SQL, Web Server, and SharePoint VMs, with a CPU bottleneck annotated]
2011 SharePoint Workload Results
[Chart: Guest vCPU Utilization, 0% to 100% (5 VMs, 3 web servers; SharePoint 2010, Windows 2008 R2 SP1, SQL Server 2008 R2); bars for the SQL, three Web Server, and SharePoint VMs]
2011 SharePoint Workload Results
[Chart: Hyper-V R2 Application Workload Scalability (SharePoint 2010, Windows 2008 R2 SP1, SQL Server 2008 R2); x-axis: Web Server VMs 1/2/3 (Total VMs 3/4/5); left y-axis: Users (light-weight, 1% concurrent), 0 to 600,000; right y-axis: Average Page Response Time (sec), 0 to 3.0]
2011 SharePoint Results Summary
• Up to 460,800 simulated SharePoint users*
• As expected, the front end is the bottleneck during single web server VM testing
• Adding web server VMs alleviates the bottleneck
• Response times improve and more requests per second are delivered as VMs are added
• 90% scaling efficiency from 1 to 2 web servers**
Hyper-V R2 SharePoint Workload Scalability
* 1% concurrent users derived from the requests per second measured during the three web server test
** Based on a comparison of requests per second divided by average page response time
Why This Matters
Performance scaled and response times dropped as Hyper-V R2 web server VMs were added to a consolidated SharePoint deployment on a single physical server.
The manageably low performance impact of Hyper-V R2 won’t be detected by the vast majority of end-users and applications.
The performance, scalability, and low overhead of Hyper-V R2 can be used to reduce costs and improve the manageability, flexibility, and availability of consolidated SharePoint applications.
The Bigger Truth
Issues to Consider
• Mileage varies; test with your workloads and your data
• Hyper-V: included for free with Windows Server 2008; proven to perform with demanding applications
• Size matters: application and web server roles are good candidates for virtualization; for larger deployments, consider deploying resource-bound SQL Server and Index roles on physical servers
• High availability matters
• Leverage ESG Lab Validations, Microsoft and its partners’ best practices/proof points
Microsoft/Intel iSCSI test
• Used Windows Server 2008 R2, Intel Xeon 5500 processors, and Intel 10Gbps Ethernet Adapters
• Reached over one million IOPS over a single 10 Gbps Ethernet link using a software iSCSI initiator on native hardware
• Reached over 700,000 IOPS over a single 10 Gbps Ethernet link using a software iSCSI initiator on Hyper-V to the guest OS
Microsoft/Intel iSCSI test
[Charts: Native Performance vs. In-Guest VM Performance]
http://gestaltit.com/all/tech/storage/stephen/microsoft-and-intel-push-one-million-iscsi-iops/
Hyper-V & VDI
What IO Bottlenecks Do You Hit First?
In order, generally that is:
Disk IO
Memory pressure
Processor
Disk IO is a performance- and density-related impact
Memory is a density impact
Processor is a performance- and density-related impact
Processor
# of VMs per core/LP is highly dependent on user scenarios
Application-specific usage plays a big role
Hyper-V supports:
1,000 VMs per cluster in clustered scenarios (max of 384 VMs per server)
384 VMs per server in non-clustered scenarios
New! 12 VMs per core/logical processor
12 VMs/core is not an architectural limitation but what we have tested and support
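These supported ceilings compose into a simple capacity check. A minimal sketch, using only the figures quoted above (the helper name and the example core counts are mine):

```python
# Supported Hyper-V density limits quoted on the slide above.
VMS_PER_LOGICAL_PROC = 12   # tested/supported ratio, not an architectural cap
MAX_VMS_PER_SERVER = 384
MAX_VMS_PER_CLUSTER = 1000

def max_vms(logical_procs, servers=1, clustered=False):
    """Supported VM ceiling for a host, or a cluster of identical hosts."""
    per_server = min(logical_procs * VMS_PER_LOGICAL_PROC, MAX_VMS_PER_SERVER)
    total = per_server * servers
    if clustered:
        total = min(total, MAX_VMS_PER_CLUSTER)
    return total

print(max_vms(16))                             # 16 LPs: 192 VMs
print(max_vms(64))                             # per-server cap: 384 VMs
print(max_vms(32, servers=4, clustered=True))  # cluster cap: 1,000 VMs
```

Note how the per-core ratio only matters on smaller hosts; beyond 32 logical processors the 384-per-server cap dominates, and in clusters the 1,000-VM cap kicks in first.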
SLAT enabled processors provide up to 25% improvement in density
What is Second Level Address Translation (SLAT)?
Intel calls it Extended Page Tables (EPT)
AMD calls it Nested Page Tables (NPT) or Rapid Virtualization Indexing (RVI)
The processor provides two levels of translation
Walks the guest OS page tables directly
No need to maintain shadow page tables
No hypervisor code for demand-fill or flush operations
Resource savings:
Hypervisor CPU time drops to 2%
Roughly 1 MB of memory saved per VM
Rule of thumb: if it doesn’t have SLAT, don’t buy it
Disk IO
Disk performance is the most critical factor in achieving density
Internal testing showed Windows 7 having lower disk IO than Windows XP after boot-up
So did ProjectVRC’s recent testing
A SAN is of critical importance. Highly recommended:
Plenty of cache
Consider de-duplication support, especially if persistent
De-duplication allows the benefits of individual images at the cost of a differencing disk
Managing images on a SAN is far faster and easier than over the network (provisioning is faster)
We mean a real SAN (iSCSI or FC), not NAS across the network…
Remember, RDS does not require this huge SAN investment…
If you have low-complexity requirements:
Think about cheaper DAS
RAID 0+1 offers better read and write performance than RAID 5
Make sure to consider RDS
Rule of thumb: SANs are your new best friends
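The RAID 0+1 vs. RAID 5 point above comes down to the write penalty: a mirrored write costs 2 backend IOs, while a small random RAID 5 write typically costs 4 (read data, read parity, write data, write parity). A back-of-envelope sketch; the penalty values and per-disk IOPS figure are standard rules of thumb, not slide data:

```python
# Back-of-envelope host-visible IOPS for a DAS disk group.
# Write penalties are the usual textbook figures (an assumption):
# RAID 10 -> 2 backend IOs per host write, RAID 5 -> 4.
WRITE_PENALTY = {"raid10": 2, "raid5": 4}

def usable_host_iops(disks, iops_per_disk, read_fraction, level):
    """Host-visible IOPS once the RAID write penalty is paid."""
    raw = disks * iops_per_disk
    penalty = WRITE_PENALTY[level]
    # Each host read costs 1 backend IO; each host write costs `penalty`.
    return raw / (read_fraction + (1 - read_fraction) * penalty)

# 8 x 15K drives (~180 IOPS each, assumed), 70% read workload:
for level in ("raid10", "raid5"):
    print(level, round(usable_host_iops(8, 180, 0.7, level)))
```

On a write-heavy VDI workload the gap widens further, which is why the slide steers low-complexity DAS builds toward RAID 0+1.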
How Does SP1 Change the Game?
Static Memory
Biggest constraint on upper-limit VM density (not performance related)
Constrained by:
Available memory slots in serversLargest Available DIMMs
Creates an artificial scale ceiling
Buy as much RAM as you expect to scale the number of VMs
Plan for and allocate at least 1 GB per Windows 7 VM on Hyper-V RTM
Memory allocation should be determined by the upper maximum limit of running apps
Allocate enough RAM to prevent the VM paging to disk
1 GB actually covers a fair amount of app use…
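The static-memory planning above reduces to simple arithmetic. A sketch, using the slide's 1 GB-per-Windows 7-VM figure and the 1 GB+ root reserve from the deck's root-configuration guidance (the function name and example VM counts are mine):

```python
# Static-memory purchase planning: 1 GB minimum per Windows 7 VM
# (sized up to the apps' maximum), plus a reserve for the management
# OS in the root partition (1 GB+, per the root-configuration slide).
def ram_to_buy_gb(vm_count, per_vm_gb=1, root_reserve_gb=1):
    """RAM (GB) a host needs to run `vm_count` fixed-allocation VMs."""
    return vm_count * per_vm_gb + root_reserve_gb

print(ram_to_buy_gb(90))               # 90 x 1 GB VMs: 91 GB
print(ram_to_buy_gb(40, per_vm_gb=2))  # heavier apps at 2 GB: 81 GB
```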
Also refer to: http://www.microsoft.com/whdc/system/sysperf/Perf_tun_srv-R2.mspx
But Dynamic Memory changes all of the above!!
Rule of thumb: More is better
How Does SP1 Change the Game?
Dynamic Memory
Still a constraint on upper-limit VM density (not performance related)
Buy as much RAM as you expect to scale the number of VMs
Optimal price/performance curve at 96GB RAM
Plan for and allocate at least 512 MB startup memory (down from 1 GB) per Windows 7 VM on Hyper-V R2 SP1
Memory allocation will be determined by Dynamic Memory based on running apps
Actual memory-pressure testing in pilot is CRITICAL
Ensure enough spare capacity to prevent the VM paging to disk
YOU DON’T NEED TO WEAKEN WINDOWS 7 SECURITY TO GET IT TO WORK!
What difference did this make in testing? A lot!
Rule of thumb: Less is more!!!
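The "less is more" rule is just startup-memory arithmetic: halving the per-VM startup figure roughly doubles how many guests can boot from the same pool, with Dynamic Memory growing each one on demand afterwards. A sketch (the 1 GB vs. 512 MB figures and 96 GB sweet spot come from the slides; the 1 GB root reserve matches the deck's root-configuration guidance):

```python
# Startup-memory arithmetic for Windows 7 guests on Hyper-V R2 SP1.
# 1 GB static vs. 512 MB dynamic startup are the slides' figures.
def bootable_vms(host_ram_mb, per_vm_startup_mb, root_reserve_mb=1024):
    """VMs that can boot given host RAM, startup size, and root reserve."""
    return (host_ram_mb - root_reserve_mb) // per_vm_startup_mb

host = 96 * 1024  # 96 GB: the price/performance sweet spot cited above
print(bootable_vms(host, 1024))  # static 1 GB: 95 VMs
print(bootable_vms(host, 512))   # dynamic 512 MB startup: 190 VMs
```

Boot density is only the ceiling, of course; the slide's warning stands that actual memory-pressure testing in a pilot decides how many of those VMs can run real workloads without paging.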
How Not to Do It…
The “Sum of the Parts” Considerations
Everything could be so right…
Powerful Dell blades
Deployed using Citrix Provisioning Services
PVS delivering from an EqualLogic SSD SAN
vDisk cache per VM, located on an EqualLogic SAS SAN
So what caused this mess? Roaming Profiles across a “slow” file server and network connection
VDI is complicated and requires careful planning and architecture
What Else Affects Density?
Poor storage architecture: the primary candidate
Anti-virus
Roaming Profiles
Look at AppSense, Citrix, or Quest to help with this
Slow networking to core infrastructure services
Lack of NIC-based TCP offloading
Poorly performing drivers
Hyper-V Configuration Guidelines
Hyper-V Root Configuration
Plan for a 1 GB+ memory reserve for the management OS in the root partition
Plan for one dedicated NIC for management purposes
Plan (ideally) for one dedicated NIC for live migration
Separate LUNs/arrays for the management OS, guest OS VHDs, and VM storage
Management OS and VHD LUNs should employ RAID to provide data protection and performance
A challenge for blades with 2 physical disks
Hyper-V Guest Configuration
Fixed-size VHDs for the virtual OS
Need to account for page file consumption in addition to OS requirements
OS VHD size (minimum 15 GB) + VM memory size = minimum VHD size
Account for space needed by additional files per VM
Example for SQL: OS VHD size + VM memory size + data files + log files
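The sizing formulas above are easy to encode. A small sketch; the 15 GB minimum and the memory term come straight from the slide, while the example SQL data/log sizes are made up for illustration:

```python
# Minimum fixed-VHD sizing for a Hyper-V guest, per the slide:
#   VHD >= OS image (15 GB minimum) + VM memory (covers the page file),
# plus data/log file terms for workloads such as SQL Server.
MIN_OS_VHD_GB = 15

def min_vhd_gb(vm_memory_gb, os_image_gb=15, data_files_gb=0, log_files_gb=0):
    """Minimum VHD size in GB; enforces the 15 GB OS floor."""
    return max(os_image_gb, MIN_OS_VHD_GB) + vm_memory_gb + data_files_gb + log_files_gb

print(min_vhd_gb(4))                                      # plain guest: 19 GB
print(min_vhd_gb(8, data_files_gb=100, log_files_gb=20))  # SQL example: 143 GB
```

The memory term matters because a default-configured guest pages to a file roughly the size of its RAM; skip it and a fixed VHD sized only for the OS fills up the first time the guest is under memory pressure.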
Windows 8 Server
Go to http://www.buildwindows.com
Conclusion
Do a POC: Hyper-V is part of Windows Server 2008 R2
You’ll be pleasantly surprised how you don’t have to add custom settings on Hyper-V to get great performance
You’ll also love how efficient the disk IO is!
Next steps to check out:
http://www.citrix.com/xendesktop
http://www.microsoft.com/hyperv
http://www.quest.com/vworkspace
Upcoming events
IPExpo, 20-21 October, Olympia: http://www.ipexpo.co.uk/
TechDays Online, 27th October, on your PC: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032493495&Culture=en-GB
Microsoft Virtual Academy: http://www.microsoftvirtualacademy.com