NoHype: Virtualized Cloud Infrastructure without the Virtualization
NoHype: Virtualized Cloud Infrastructure
without the Virtualization
Eric Keller, Jakub Szefer, Jennifer Rexford, Ruby Lee
ISCA 2010
Princeton University
Slides taken from:
http://www.cs.princeton.edu/~jrex/talks/isca10.pptx
Virtualized Cloud Infrastructure
• Run virtual machines on a hosted infrastructure
• Benefits…
  – Economies of scale
  – Dynamically scale (pay for what you use)
3
Without the Virtualization
• Virtualization used to share servers
  – Software layer running under each virtual machine
• Malicious software can run on the same server
  – Attack the hypervisor
  – Access/Obstruct other VMs

[Figure: Guest VM1 and Guest VM2 (Apps on OS) running on a hypervisor that shares the physical hardware of the servers]
5
Are these vulnerabilities imagined?
• No headlines… but that doesn’t mean it’s not real
  – Not enticing enough to hackers yet? (small market size, lack of confidential data)
• Virtualization layer is huge and growing
  – ~100 thousand lines of code in the hypervisor
  – ~1 million lines in the privileged virtual machine
• Derived from existing operating systems
  – Which have security holes
6
NoHype
• NoHype removes the hypervisor
  – There’s nothing to attack
  – Complete system solution
  – Still retains the needs of a virtualized cloud infrastructure

[Figure: Guest VM1 and Guest VM2 (Apps on OS) run directly on the physical hardware; no hypervisor]
7
Virtualization in the Cloud
• Why does a cloud infrastructure use virtualization?
  – To support dynamically starting/stopping VMs
  – To allow servers to be shared (multi-tenancy)
• It does not need the full power of modern hypervisors
  – Emulating diverse (potentially older) hardware
  – Maximizing server consolidation
8
Roles of the Hypervisor
• Isolating/Emulating resources
  – CPU: Scheduling virtual machines
  – Memory: Managing memory
  – I/O: Emulating I/O devices
• Networking
• Managing virtual machines
9
Roles of the Hypervisor
• Isolating/Emulating resources
  – CPU: Scheduling virtual machines
  – Memory: Managing memory
  – I/O: Emulating I/O devices
• Networking
• Managing virtual machines
NoHype approach: push to hardware / pre-allocate, remove, or push to the side
NoHype has a double meaning… “no hype”
13
Scheduling Virtual Machines
• Scheduler called each time the hypervisor runs (periodically, on I/O events, etc.)
  – Chooses what to run next on a given core
  – Balances load across cores

[Figure: timeline of one core today: the hypervisor runs on timer and I/O events and switches between VMs]
Today
14
Dedicate a Core to a Single VM
• Ride the multi-core trend
  – 1 core on a 128-core device is ~0.8% of the processor
• Cloud computing is pay-per-use
  – During high demand, spawn more VMs
  – During low demand, kill some VMs
  – Customers maximize each VM’s work, which minimizes the opportunity for over-subscription
NoHype
15
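The one-VM-per-core idea above can be sketched as a simple allocator: each guest VM is pinned to a dedicated core for its lifetime, so there is no runtime scheduler to attack. A minimal sketch; `CoreAllocator` and its method names are illustrative, not from the paper:

```python
# Sketch of one-VM-per-core allocation: each guest VM gets a dedicated
# core until it is killed, so no hypervisor scheduler runs underneath it.
# Class and method names are illustrative, not from the paper.

class CoreAllocator:
    def __init__(self, num_cores):
        # Core 0 is reserved for management; the rest are for guest VMs.
        self.free_cores = set(range(1, num_cores))
        self.vm_to_core = {}

    def spawn_vm(self, vm_id):
        """Dedicate a free core to a new VM; fail rather than over-subscribe."""
        if not self.free_cores:
            raise RuntimeError("no free core: cannot over-subscribe")
        core = self.free_cores.pop()
        self.vm_to_core[vm_id] = core
        return core

    def kill_vm(self, vm_id):
        """Release the VM's core back to the free pool."""
        self.free_cores.add(self.vm_to_core.pop(vm_id))

alloc = CoreAllocator(num_cores=4)
c1 = alloc.spawn_vm("vm1")
c2 = alloc.spawn_vm("vm2")
assert c1 != c2            # each VM owns a distinct core
alloc.kill_vm("vm1")       # low demand: kill a VM, its core is reusable
```

Spawning more VMs under high demand and killing them under low demand maps directly onto grabbing and releasing cores, mirroring the pay-per-use model on the slide.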
Managing Memory
• Goal: system-wide optimal usage
  – i.e., maximize server consolidation
• Hypervisor controls allocation of physical memory

[Figure: bar chart (0–600) of physical memory divided among VM/app 1 (max 400), VM/app 2 (max 300), and VM/app 3 (max 400)]
Today
16
Pre-allocate Memory
• In cloud computing: charged per unit
  – e.g., VM with 2 GB memory
• Pre-allocate a fixed amount of memory
  – Memory is fixed and guaranteed
  – Guest VM manages its own physical memory (deciding what pages to swap to disk)
• Processor support for enforcing allocation and bus utilization
NoHype
17
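The pre-allocation scheme above can be sketched as a bump allocator that carves out a fixed, guaranteed region per VM at launch and never rebalances; the enforcement check below stands in for what the slide says the processor would enforce in hardware. `MemoryPartitioner` and its names are illustrative:

```python
# Sketch of fixed memory pre-allocation: each VM buys a fixed amount
# (e.g. 2 GB), the region is carved out at launch and guaranteed, and
# no hypervisor rebalances memory at runtime. Names are illustrative.

class MemoryPartitioner:
    def __init__(self, total_mb):
        self.total_mb = total_mb
        self.next_base = 0        # simple bump allocator over physical memory
        self.regions = {}         # vm_id -> (base_mb, size_mb)

    def preallocate(self, vm_id, size_mb):
        """Reserve a fixed region; refuse to over-commit physical memory."""
        if self.next_base + size_mb > self.total_mb:
            raise MemoryError("cannot over-commit physical memory")
        self.regions[vm_id] = (self.next_base, size_mb)
        self.next_base += size_mb
        return self.regions[vm_id]

    def owns(self, vm_id, addr_mb):
        """The access check the processor would enforce in hardware."""
        base, size = self.regions[vm_id]
        return base <= addr_mb < base + size

mem = MemoryPartitioner(total_mb=4096)
mem.preallocate("vm1", 2048)   # the "VM with 2GB memory" from the slide
mem.preallocate("vm2", 1024)
assert mem.owns("vm1", 100) and not mem.owns("vm2", 100)
```

Inside its fixed region, each guest OS manages its own physical memory (including what to swap to disk), so no system-wide memory manager is needed.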
Emulate I/O Devices
• Guest sees virtual devices
  – Access to a device’s memory range traps to the hypervisor
  – Hypervisor handles interrupts
  – Privileged VM emulates devices and performs I/O

[Figure: guest VM device accesses trap to the hypervisor, which hypercalls into a privileged VM that holds the real drivers and emulates the devices]
Today
18
Dedicate Devices to a VM
• In cloud computing, only networking and storage
• Static memory partitioning for enforcing access
  – Processor (for access to the device), IOMMU (for access from the device)

[Figure: Guest VM1 and Guest VM2 access their dedicated devices directly on the physical hardware]
NoHype
20
Virtualize the Devices
• Per-VM physical device doesn’t scale
• Multiple queues on the device
  – Multiple memory ranges mapping to different queues

[Figure: network card with classify/MUX logic and MAC/PHY, attached to the processor chipset over the peripheral bus]
NoHype
PCI-SIG standard: SR-IOV (Single Root I/O Virtualization)
21
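The multiple-queue idea above can be sketched in software: the device classifies each incoming frame and demultiplexes it onto the queue mapped into the owning VM's memory, so no privileged VM touches the packet. A toy model; real SR-IOV classification happens in NIC hardware, and `VirtualizedNic` and its names are illustrative:

```python
# Sketch of SR-IOV-style receive classification: the NIC classifies each
# incoming frame by destination MAC and places it on the per-VM queue
# mapped into that VM's memory range. Names are illustrative; the real
# classify/MUX logic lives in NIC hardware.
from collections import defaultdict

class VirtualizedNic:
    def __init__(self):
        self.mac_to_vm = {}                  # classifier table
        self.queues = defaultdict(list)      # per-VM receive queues

    def assign_virtual_function(self, mac, vm_id):
        """Bind a MAC address (and thus a queue) to one guest VM."""
        self.mac_to_vm[mac] = vm_id

    def receive(self, frame):
        """Classify by destination MAC and demux to the owner's queue."""
        vm_id = self.mac_to_vm.get(frame["dst_mac"])
        if vm_id is not None:
            self.queues[vm_id].append(frame)   # unknown traffic is dropped

nic = VirtualizedNic()
nic.assign_virtual_function("aa:bb:cc:00:00:01", "vm1")
nic.assign_virtual_function("aa:bb:cc:00:00:02", "vm2")
nic.receive({"dst_mac": "aa:bb:cc:00:00:01", "payload": b"hello"})
assert len(nic.queues["vm1"]) == 1 and len(nic.queues["vm2"]) == 0
```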
Networking
• Ethernet switches connect servers

[Figure: two servers connected by an Ethernet switch]
Today
22
Networking (in virtualized server)
• Software Ethernet switches connect VMs

[Figure: Guest VM1 and Guest VM2 connected through a software switch running in a privileged VM, via the hypervisor]
Today
25
Do Networking in the Network
• Co-located VMs communicate through software
  – Performance penalty for VMs that are not co-located
  – Special case in cloud computing
  – Artifact of going through the hypervisor anyway
• Instead: utilize hardware switches in the network
  – Modification to support hairpin turnaround
NoHype
26
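The hairpin requirement above can be sketched with a toy learning switch: normally a switch never forwards a frame back out its ingress port, but two VMs on the same server share a single uplink, so the switch must turn the frame around. `HairpinSwitch` and its names are illustrative, not from the paper:

```python
# Sketch of the hairpin turnaround NoHype needs from hardware switches:
# two VMs on the same server sit behind one uplink port, so same-server
# VM-to-VM traffic must be forwarded back out the port it arrived on.
# Names are illustrative.

class HairpinSwitch:
    def __init__(self):
        self.mac_table = {}    # learned: MAC -> port

    def learn(self, mac, port):
        self.mac_table[mac] = port

    def forward(self, frame, in_port):
        """Return the egress port; hairpin (out == in) is allowed."""
        return self.mac_table.get(frame["dst_mac"])

sw = HairpinSwitch()
sw.learn("mac-vm1", port=3)   # both VMs behind the server uplink on port 3
sw.learn("mac-vm2", port=3)
# Same-server traffic hairpins back out port 3 instead of being dropped.
assert sw.forward({"dst_mac": "mac-vm2"}, in_port=3) == 3
```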
Managing Virtual Machines
• Allowing a customer to start and stop VMs

[Figure: the cloud customer's "Start VM" request travels over the wide-area network to the cloud provider; the cloud manager forwards the request, with a VM image, to one of the servers]
Today
28
Hypervisor’s Role in Management
• Run as an application in the privileged VM
• Receive request from the cloud manager
• Form request to the hypervisor
• Launch VM

[Figure: the VM management application in the privileged VM, running on the hypervisor, launches Guest VM1 (Apps on OS) on the physical hardware]
Today
32
Decouple Management and Operation
• System manager runs on its own core
• Sends an IPI to start/stop a VM
• Core manager sets up the core, launches the VM
  – Not run again until the VM is killed

[Figure: the system manager on core 0 sends an IPI to the core manager on core 1, which launches Guest VM2 (Apps on OS)]
NoHype
35
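The decoupled start/stop path above can be sketched as message passing: the system manager on its own core sends a start message (standing in for the IPI) to a per-core manager, which sets the core up, hands it to the guest, and does not run again until the VM is killed. `SystemManager` and `CoreManager` are illustrative names:

```python
# Sketch of decoupled VM management: the system manager (core 0) signals
# a per-core manager, which launches the VM and then steps aside until
# the VM is stopped. The method call stands in for an inter-processor
# interrupt (IPI); names are illustrative.

class CoreManager:
    def __init__(self, core_id):
        self.core_id = core_id
        self.running_vm = None

    def handle_ipi(self, command, vm_id=None):
        if command == "start":
            # Set up the core (page tables, device mappings), launch the
            # guest, then step aside: the manager does not run again.
            self.running_vm = vm_id
        elif command == "stop":
            self.running_vm = None   # only now does the manager run again

class SystemManager:
    def __init__(self, core_managers):
        self.cores = core_managers   # runs on its own core (core 0)

    def start_vm(self, vm_id, core_id):
        self.cores[core_id].handle_ipi("start", vm_id)   # "IPI"

    def stop_vm(self, core_id):
        self.cores[core_id].handle_ipi("stop")

cores = {1: CoreManager(1), 2: CoreManager(2)}
mgr = SystemManager(cores)
mgr.start_vm("vm1", core_id=1)
assert cores[1].running_vm == "vm1"
mgr.stop_vm(core_id=1)
assert cores[1].running_vm is None
```

Because guest VMs never interact with the system manager directly, removing this interface is what shrinks the attack surface the earlier slides describe.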
Removing the Hypervisor: Summary
• Scheduling virtual machines
  – One VM per core
• Managing memory
  – Pre-allocate memory with processor support
• Emulating I/O devices
  – Direct access to virtualized devices
• Networking
  – Utilize hardware Ethernet switches
• Managing virtual machines
  – Decouple management from operation
36
Security Benefits
• Confidentiality/Integrity of data
• Availability
• Side channels
37
Confidentiality/Integrity of Data
• Requires access to the data
• System manager can alter memory access rules
  – But guest VMs do not interact with the system manager

  With hypervisor                      | NoHype
  -------------------------------------|------------------
  Registers upon VM exit               | No scheduling
  Packets sent through software switch | No software switch
  Memory accessible by hypervisor      | No hypervisor
39
NoHype Double Meaning
• Means no hypervisor; also means “no hype”
• Multi-core processors
  – Available now
• Extended (Nested) Page Tables
  – Available now
• SR-IOV and Directed I/O (VT-d)
  – Network cards now, storage devices in the near future
• Virtual Ethernet Port Aggregator (VEPA)
  – Next-generation switches
40
Conclusions and Future Work
• Trend towards hosted and shared infrastructures
• Significant security issue threatens adoption
• NoHype solves this by removing the hypervisor
• Performance improvement is a side benefit
• Future work:
  – Implement on current hardware
  – Assess needs for future processors
41
Questions?
Contact info:
http://www.princeton.edu/~ekeller
http://www.princeton.edu/~szefer