
NoHype: Virtualized Cloud Infrastructure

without the Virtualization

Eric Keller, Jakub Szefer, Jennifer Rexford, Ruby Lee

IBM Cloud Computing Student Workshop

(ISCA 2010 + Ongoing work)

Princeton University

Virtualized Cloud Infrastructure

• Run virtual machines on a hosted infrastructure

• Benefits…
  – Economies of scale
  – Dynamically scale (pay for what you use)

Without the Virtualization

• Virtualization used to share servers
  – Software layer running under each virtual machine

• Malicious software can run on the same server
  – Attack the hypervisor
  – Access/obstruct other VMs

[Figure (Today): Guest VM1 and Guest VM2, each running Apps on an OS, on a hypervisor atop the physical hardware]

Are these vulnerabilities imagined?

• No headlines… doesn’t mean it’s not real
  – Not enticing enough to hackers yet?
    (small market size, lack of confidential data)

• Virtualization layer is huge and growing
  – ~100 thousand lines of code in the hypervisor
  – ~1 million lines in the privileged virtual machine

• Derived from existing operating systems
  – Which have security holes

NoHype

• NoHype removes the hypervisor
  – There’s nothing to attack
  – Complete system solution
  – Still meets the needs of a virtualized cloud infrastructure

[Figure (NoHype): Guest VM1 and Guest VM2 running directly on the physical hardware — no hypervisor]

Virtualization in the Cloud

• Why does a cloud infrastructure use virtualization?
  – To support dynamically starting/stopping VMs
  – To allow servers to be shared (multi-tenancy)

• Does not need the full power of modern hypervisors
  – Emulating diverse (potentially older) hardware
  – Maximizing server consolidation

Roles of the Hypervisor

• Isolating/Emulating resources
  – CPU: Scheduling virtual machines
  – Memory: Managing memory
  – I/O: Emulating I/O devices

• Networking

• Managing virtual machines

NoHype approach, per role: push to HW / pre-allocate, remove, or push to the side

NoHype has a double meaning… “no hype”

Scheduling Virtual Machines

• Scheduler called each time the hypervisor runs (periodically, on I/O events, etc.)
  – Chooses what to run next on a given core
  – Balances load across cores

[Figure (Today): timeline of VMs on a core — timer and I/O events invoke the hypervisor, each causing a VM switch]

Dedicate a core to a single VM

• Ride the multi-core trend
  – 1 core on a 128-core device is ~0.8% of the processor

• Cloud computing is pay-per-use
  – During high demand, spawn more VMs
  – During low demand, kill some VMs
  – Customers maximize each VM’s work, which minimizes the opportunity for over-subscription

(NoHype)
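The one-VM-per-core policy above can be sketched as a trivial allocator. This is an illustrative Python sketch, not code from the talk; all names are hypothetical.

```python
class CoreAllocator:
    """Dedicate each physical core to at most one VM (NoHype-style)."""

    def __init__(self, num_cores):
        self.cores = {c: None for c in range(num_cores)}  # core -> VM id

    def spawn_vm(self, vm_id):
        # Find a free core; with no time-sharing, a full machine must
        # reject the request rather than oversubscribe.
        for core, owner in self.cores.items():
            if owner is None:
                self.cores[core] = vm_id
                return core
        raise RuntimeError("no free core: cannot oversubscribe")

    def kill_vm(self, vm_id):
        # Killing a VM frees its dedicated core for a future spawn.
        for core, owner in self.cores.items():
            if owner == vm_id:
                self.cores[core] = None
```

On a 128-core machine each spawned VM therefore costs a fixed ~1/128 ≈ 0.8% of the processor, matching the slide's arithmetic.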

Managing Memory

• Goal: system-wide optimal usage
  – i.e., maximize server consolidation

• Hypervisor controls allocation of physical memory

[Figure (Today): memory (0–600) allocated over time among VM/app 1 (max 400), VM/app 2 (max 300), and VM/app 3 (max 400)]

Pre-allocate Memory

• In cloud computing: charged per unit
  – e.g., VM with 2 GB memory

• Pre-allocate a fixed amount of memory
  – Memory is fixed and guaranteed
  – Guest VM manages its own physical memory (deciding what pages to swap to disk)

• Processor support for enforcing:
  – allocation and bus utilization

(NoHype)
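The pre-allocated, bounds-checked memory range can be modeled like this. Illustrative Python sketch only — in NoHype the check is done by processor hardware (e.g., via pre-filled page mappings), not by software on each access.

```python
class MemoryPartition:
    """Fixed, pre-allocated guest-physical memory range for one VM."""

    def __init__(self, base, size):
        self.base, self.size = base, size  # host-physical base and length

    def translate(self, guest_addr):
        # The guest manages its own pages (deciding what to swap);
        # the only enforcement needed is a bounds check on the range.
        if 0 <= guest_addr < self.size:
            return self.base + guest_addr
        raise MemoryError("access outside pre-allocated range")
```

Because the range is fixed at VM start, the guest gets a guarantee (no ballooning away its memory) and the host needs no runtime allocator.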

Emulate I/O Devices

• Guest sees virtual devices
  – Access to a device’s memory range traps to the hypervisor
  – Hypervisor handles interrupts
  – Privileged VM emulates devices and performs I/O

[Figure (Today): guest VM device accesses trap to the hypervisor, which hypercalls into the Priv. VM holding the real drivers and device emulation]

Dedicate Devices to a VM

• In cloud computing, only networking and storage

• Static memory partitioning for enforcing access
  – Processor enforces access to the device; IOMMU enforces access from the device

[Figure (NoHype): Guest VM1 and Guest VM2 each access their dedicated devices directly on the physical hardware]

Virtualize the Devices

• A per-VM physical device doesn’t scale

• Multiple queues on the device
  – Multiple memory ranges mapping to different queues

[Figure (NoHype): network card with classify and MUX stages in front of the MAC/PHY, attached to the processor/chipset and memory over the peripheral bus]
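The card's classify/MUX stage can be illustrated as follows — a software model of what the NIC does in hardware (e.g., steering by destination MAC, as SR-IOV-capable NICs do). Python sketch; field names are hypothetical.

```python
def classify(frame, queues):
    """Demultiplex an incoming frame to a per-VM queue.

    Each VM's virtual NIC owns a MAC address and a dedicated queue,
    whose memory range is mapped only into that VM.
    """
    dst = frame["dst_mac"]
    if dst in queues:
        queues[dst].append(frame)  # hardware would DMA into the VM's queue
        return dst
    return None  # no matching VM: drop the frame
```

With the queue memory ranges statically mapped per VM, no hypervisor needs to mediate the data path.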

Networking

• Ethernet switches connect servers

[Figure (Today): two servers connected by a hardware Ethernet switch]

Networking (in virtualized server)

• Software Ethernet switches connect VMs

[Figure (Today): Guest VM1 and Guest VM2 connected through a software switch, located either in the hypervisor or in the Priv. VM]

Do Networking in the Network

• Co-located VMs communicate through software
  – Performance penalty for VMs that are not co-located
  – Special case in cloud computing
  – Artifact of going through the hypervisor anyway

• Instead: utilize hardware switches in the network
  – Modification to support hairpin turnaround

(NoHype)
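Hairpin turnaround can be sketched as a forwarding decision on the external switch. Illustrative Python model of a switch's MAC table, not code from the talk: a standard Ethernet switch never sends a frame back out its ingress port, so the modification is to relax exactly that rule when two VMs sit behind the same server port.

```python
def forward(switch_table, in_port, dst_mac):
    """Hardware-switch forwarding with hairpin turnaround.

    switch_table maps MAC address -> switch port.
    """
    out_port = switch_table.get(dst_mac)
    if out_port is None:
        return "flood"                  # unknown destination
    if out_port == in_port:
        return ("hairpin", out_port)    # turn the frame around: both
                                        # VMs are behind this one port
    return ("forward", out_port)
```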

Removing the Hypervisor Summary

• Scheduling virtual machines
  – One VM per core

• Managing memory
  – Pre-allocate memory with processor support

• Emulating I/O devices
  – Direct access to virtualized devices

• Networking
  – Utilize hardware Ethernet switches

• Managing virtual machines
  – Decouple the management from operation

NoHype Double Meaning

• Means no hypervisor, also means “no hype”

• Multi-core processors

• Extended Page Tables

• SR-IOV and Directed I/O (VT-d)

• Virtual Ethernet Port Aggregator (VEPA)

Current Work: Implement it on today’s HW

Xen as a Starting Point

• Management tools

• Pre-allocate resources
  – i.e., configure virtualized hardware

• Launch VM

[Figure: xm in the Priv. VM launches Guest VM1 on a dedicated core under Xen; the EPT mapping is pre-filled to partition memory]

Network Boot

• gPXE in hvmloader
  – Added support for igbvf (Intel 82576)

• Allows us to remove disks
  – Which are not virtualized yet

[Figure: hvmloader in Guest VM1 network-boots via DHCP/gPXE from servers]

Allow Legacy Bootup Functionality

• Known good kernel + initrd (our code)
  – PCI reads return “no device” except for the NIC
  – HPET reads to determine clock frequency

[Figure: the known-good kernel in Guest VM1 boots via DHCP/gPXE from servers]
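The "no device except the NIC" behavior during the kernel's PCI discovery can be modeled as below. Illustrative Python sketch; the NIC's bus/device/function location and the ID word shown are hypothetical, not taken from the talk.

```python
NO_DEVICE = 0xFFFFFFFF  # conventional "no device present" config read

def pci_config_read(bus, dev, func, nic_bdf=(0, 4, 0)):
    """Answer the vendor/device ID word for a PCI config-space probe.

    During legacy boot-up, every probe sees "no device" except the
    passed-through NIC, so the guest kernel discovers only hardware
    it is actually allowed to touch.
    """
    if (bus, dev, func) == nic_bdf:
        return 0x10CA8086  # device/vendor word for an 82576 VF (illustrative)
    return NO_DEVICE
```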

Use Device Level Virtualization

• Pass through the virtualized NIC

• Pass through the local APIC (for the timer)

[Figure: Guest VM1 with the NIC and local APIC passed through, no longer trapping to Xen]

Block All Hypervisor Access

• Mount iSCSI drive for the user disk

• Before jumping to user code, switch off the hypervisor
  – Any VM exit causes a “kill VM”
  – User can load kernel modules, any applications

[Figure: Guest VM1 on its own core; any VM exit triggers “kill VM”; the user disk is mounted from iSCSI servers]
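The switch-off policy above can be modeled in a few lines. Illustrative Python only — real VM exits are hardware events handled in VMX root mode, not Python calls:

```python
def handle_vm_exit(reason, hypervisor_active):
    """Exit policy before and after the hypervisor switch-off point.

    During setup, the temporarily present hypervisor services exits
    as usual.  Once the guest's own code runs, any exit means the VM
    attempted something that would need a hypervisor, so it is killed.
    """
    if hypervisor_active:
        return f"emulate:{reason}"  # normal setup-time handling
    return "kill_vm"                # NoHype: no exits are tolerated
```

This is what makes the guarantee simple to state: after the jump to user code, there is no code path on which guest actions reach hypervisor logic.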

Timeline

[Figure: timeline — hvmloader, then kernel set-up (device discovery), run while VMX root is active; customer code then runs purely in guest VM space]

Next Steps

• Assess needs for future processors

• Assess OS modifications
  – to eliminate the need for a golden image (e.g., push configuration instead of discovery)

Conclusions

• Trend towards hosted and shared infrastructures

• Significant security issue threatens adoption

• NoHype solves this by removing the hypervisor

• Performance improvement is a side benefit


Questions?

Contact info:

[email protected]

http://www.princeton.edu/~ekeller

[email protected]

http://www.princeton.edu/~szefer