
2012 Storage Developer Conference. © Micron Technologies Inc. All Rights Reserved.

I/O Virtualization: Enabling New Architectures in the Data Center for Server and I/O Connectivity
Sujoy Sen, Micron


Server Evolution

              1970s-80s (mainframe)   1990s                 2000s             2010s
Performance   Very Powerful           Powerful in numbers   Powerful          Very Powerful
Cost          Expensive               Cheap                 Cheap             Cheap
Standard?     Proprietary             Standardized          Standardized      Standardized
CPU           Virtualized             Physical/Fixed        Virtualized       Virtualized
I/O           Virtualized/External    Physical/Fixed        Physical/Fixed    Virtualized/External

History repeats itself…but improved


Agile Computing

• March towards “stateless” servers
  • CPU Virtualization -> Compute decoupled from physical CPUs
  • Storage Virtualization -> Capacity decoupled from physical servers
  • I/O Virtualization -> I/O decoupled from physical servers


Why Decouple I/O?

• Increase I/O utilization
• Independent scaling of I/O and compute
  • CPU and I/O speeds and feeds evolve at different cadences
  • Entry point for new I/O technologies into the volume market
• Operational agility
  • On-demand access to bandwidth, connectivity and capacity


Ways to Decouple I/O?

• Physical decoupling
  • Stretch I/O outside the server like a “rubber band”
  • I/O resources treated as discrete elements
  • Allows independent upgrades, but minimal operational or utilization benefits
• Shared decoupling
  • Still physically decoupled, but I/O resources treated as an aggregate pool
  • Shared I/O leads to virtualized I/O -> “I/O Virtualization”


Shared I/O Defined…

• Shared I/O is …
  • …about decoupling I/O from the CPU
  • …about efficient resource utilization
  • …about abstraction and operational agility
• Shared I/O is not …
  • …about fabric convergence
  • …about clustering


Shared I/O Technologies

• Protocol encapsulation (a minimal framing sketch follows this list)
  • Encapsulate diverse I/O protocols (FC, Ethernet, SCSI, etc.) over a common interconnect
  • Provide native I/O interfaces into the OS and use driver/hardware to encapsulate/decapsulate
  • InfiniBand and Ethernet are the commonly used interconnects
• Native I/O protocols shared directly over PCI Express
  • Takes advantage of PCI-E as the native I/O interconnect of the CPU
  • All server I/O already uses PCI-E as the connectivity into the server (PCI-E 10GbE NIC, PCI-E FC HBA, PCI-E SSD, etc.)
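To make the encapsulation approach concrete, here is a minimal sketch of wrapping a native I/O command in a common header for transport over a shared interconnect and unwrapping it on the far side. The header layout, field names and protocol IDs are invented for this example; they are not the wire format of InfiniBand, of any encapsulation standard, or of any product discussed here.

```python
# Illustrative only: header layout, field names and protocol IDs are invented.
import struct

PROTO_SCSI = 1       # hypothetical protocol identifiers
PROTO_ETHERNET = 2

HEADER = struct.Struct("!BBHI")   # version, protocol, flags, payload length

def encapsulate(protocol_id: int, payload: bytes) -> bytes:
    """Wrap a native I/O payload (e.g. a SCSI CDB or an Ethernet frame)
    in a common header so it can cross the shared interconnect."""
    return HEADER.pack(1, protocol_id, 0, len(payload)) + payload

def decapsulate(frame: bytes) -> tuple[int, bytes]:
    """On the appliance side, strip the common header and hand the original
    payload to the backend driver for that protocol."""
    _version, protocol_id, _flags, length = HEADER.unpack_from(frame)
    return protocol_id, frame[HEADER.size:HEADER.size + length]

# A SCSI READ(10) CDB rides, unchanged, inside the common framing.
cdb = bytes([0x28, 0, 0, 0, 0, 0x10, 0, 0, 0x08, 0])
proto, out = decapsulate(encapsulate(PROTO_SCSI, cdb))
assert proto == PROTO_SCSI and out == cdb
```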


Ideal Shared I/O Requirements

• Open and standards-based architecture
• Compatible with the existing server and I/O ecosystem
  • Ideally, zero change in server software
• Zero performance overhead
  • Leverage strides in server performance
• Applicable to all I/O types

PCI Express is the natural CPU-I/O interconnect to decouple I/O from servers


Deployment in the Data Center

• A new class of appliance in the data center
  • TOR or in-blade chassis
  • Stuffed with various I/O, shared over PCI-E to the servers
• Rack (or blade) full of space/power-efficient compute engines (CPU and memory)
• But what usage models does this create?


[Diagram: Agility in operational management]
• Virtual wires, provisioning and maintenance: an I/O profile bundles a server's I/O identity, e.g. IO Profile 1 = function(MAC1, WWPN1, LUN1) and IO Profile 2 = function(MAC2, WWPN2, LUN2). The shared I/O appliance binds a profile to the production server or the standby server and connects it to the production and test & debug networks (sketched below).
• Dynamic resource allocation: bandwidth shares follow demand, e.g. an email application and a data-mining application split the shared link 90%/10% during the daytime and 30%/70% at night (sketched below).
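The “virtual wires” panel can be read as a small data-structure problem: an I/O profile is a bundle of identities that the appliance can re-bind to a different server without the network or SAN noticing. The sketch below is purely illustrative; the class and method names are invented, not an actual appliance API.

```python
# Illustrative sketch of an I/O profile as shown in the diagram: a bundle of
# identities (MAC, WWPN, boot LUN) that the shared I/O appliance can bind to
# any server. Class and method names are invented for this example.
from dataclasses import dataclass

@dataclass
class IOProfile:
    name: str
    mac: str        # Ethernet identity seen by the production network
    wwpn: str       # Fibre Channel identity seen by the SAN
    boot_lun: int   # LUN the server boots from

class SharedIOAppliance:
    def __init__(self):
        self.bindings: dict[str, IOProfile] = {}   # server -> profile

    def assign(self, server: str, profile: IOProfile) -> None:
        self.bindings[server] = profile

    def fail_over(self, from_server: str, to_server: str) -> None:
        # The standby server inherits the production server's I/O identity,
        # so the network and SAN see no change.
        self.bindings[to_server] = self.bindings.pop(from_server)

appliance = SharedIOAppliance()
appliance.assign("production", IOProfile("profile1", "00:1a:2b:3c:4d:5e",
                                          "50:06:01:60:3b:e0:12:34", 1))
appliance.fail_over("production", "standby")
print(appliance.bindings["standby"].name)   # profile1 now serves the standby server
```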
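The dynamic-resource-allocation panel amounts to a time-varying split of the shared bandwidth. A minimal sketch, using the percentages from the diagram and an assumed 40 Gb/s shared uplink (the link speed is not given on the slide):

```python
# Shifting the shared link's bandwidth between workloads by time of day.
# Shares and application names mirror the diagram; the policy is hypothetical.
def bandwidth_shares(hour: int) -> dict[str, float]:
    """Return each application's fraction of the shared I/O bandwidth."""
    daytime = 6 <= hour < 18
    return ({"email": 0.9, "data_mining": 0.1} if daytime
            else {"email": 0.3, "data_mining": 0.7})

link_gbps = 40  # assumed capacity of the shared uplink
for hour in (10, 22):
    shares = bandwidth_shares(hour)
    print(hour, {app: round(frac * link_gbps, 1) for app, frac in shares.items()})
# 10 {'email': 36.0, 'data_mining': 4.0}
# 22 {'email': 12.0, 'data_mining': 28.0}
```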


Simplified connectivity and path to higher speed I/O

                         Traditional I/O    Shared I/O
Ethernet Switch / NIC    12 / 96            2 / 0
SAN Switch / FC HBA      2 / 16             0 / 0
VIO Appliance            0                  2
Ethernet Cable           192                20
SAN Cable                40                 8
PCI Express Cable        0                  32
Total # of Cables        232                60
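As a quick sanity check, the cable totals in the table follow directly from the per-type counts on the slide:

```python
# Tally of the cable counts from the table above (per-type numbers are taken
# directly from the slide).
traditional = {"ethernet": 192, "san": 40, "pci_express": 0}
shared = {"ethernet": 20, "san": 8, "pci_express": 32}
print(sum(traditional.values()), sum(shared.values()))   # 232 60
```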


Local Area Storage – Sharing PCI-E SSDs

Choose one:
• Networked storage (resilient, sharable, managed)
  • Data center storage is primarily networked
  • Capacity management
  • Data resilience and availability
  • Supports server virtualization
  • Data sharing
OR
• PCIe SSD (performance)
  • Very high throughput (~800K IOPS)
  • Very low data latency
  • But very expensive and not networked
Or virtualize the PCIe SSD: resilient, sharable, managed, very high performance … and a bit more affordable.


Local Area Storage: Cost Effective Utilization

[Charts: Max 100K IOPS per server; Max 90K IOPS by HDFS]
Sources: http://hadoopblog.blogspot.com/ ; Demartek Report: High Performance MySQL Cluster, April 2012

• Many applications benefit from faster storage…
• …but very few need (or can use) >100K sustained IOPS.
• Sharing (see the back-of-the-envelope sketch below):
  • Amortises cost
  • Provides useful performance at an affordable price
  • Performance can scale with demand
  • Provides an entry point for new technologies into volume markets
  • Enables cost-effective solutions for bursty usage models
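A back-of-the-envelope calculation using the numbers on these slides (~800K IOPS from one shared PCIe SSD, ~100K IOPS per server) shows why sharing amortizes the cost; the 30% average-utilization figure is an added assumption, not a number from the deck.

```python
# One shared PCIe SSD versus servers that rarely sustain more than ~100K IOPS.
ssd_iops = 800_000          # aggregate capability of the shared PCIe SSD
per_server_peak = 100_000   # roughly what one server can drive
avg_utilization = 0.30      # assumed: servers rarely run at peak all the time

servers_at_peak = ssd_iops // per_server_peak
servers_at_avg = int(ssd_iops // (per_server_peak * avg_utilization))
print(servers_at_peak)   # 8 servers if all of them drive peak IOPS at once
print(servers_at_avg)    # ~26 servers if demand is bursty and averages ~30%
```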


Local Area Storage: Usage of shared PCI-E SSD

• Short term: server-side “network” cache
  • Coherence: write-back (WB) vs. write-through (WT); see the sketch after this list
  • Guest-mobility friendly
  • Available
    • Independent of server failure
    • Application still has access to the cache
    • Cache warming not needed
  • Cache size tunable based on application needs instead of fixed provisioning
• Longer term: primary storage for in-rack clusters
  • Capacity
  • Snapshots, replication, backup
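The WB-vs-WT coherence bullet is the classic write-back versus write-through trade-off. A minimal sketch, assuming an invented Cache class and an in-memory backing store standing in for networked storage; a real shared-SSD cache would also need cross-server invalidation.

```python
# Write-back acknowledges writes from the cache and flushes later;
# write-through keeps the backing store current on every write.
class Cache:
    def __init__(self, backing_store: dict, write_back: bool):
        self.lines: dict[int, bytes] = {}
        self.dirty: set[int] = set()
        self.backing = backing_store
        self.write_back = write_back

    def write(self, block: int, data: bytes) -> None:
        self.lines[block] = data
        if self.write_back:
            self.dirty.add(block)          # WB: ack fast, flush later
        else:
            self.backing[block] = data     # WT: backing store always current

    def flush(self) -> None:
        for block in self.dirty:
            self.backing[block] = self.lines[block]
        self.dirty.clear()

store: dict[int, bytes] = {}
wb = Cache(store, write_back=True)
wb.write(0, b"hello")
print(0 in store)   # False until flush(); a WT cache would already show True
wb.flush()
print(0 in store)   # True
```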


Summary

• I/O decoupling and sharing is the next step in data center evolution
• I/O sharing leads to
  • Operational agility
  • Increased I/O utilization
  • Independent scaling of I/O and compute
• PCI-E SSDs create a compelling use case for I/O sharing