Open vStorage Road Show 2015 Q1


Wim Provoost (@wimpers_be)
Open vStorage (@openvstorage)

http://www.openvstorage.com

A product by CloudFounders

CloudFounders

vRUN: Converged infrastructure that combines the benefits of the hyperconverged approach yet offers independent compute and storage scaling.

Open vStorage: Core storage technology.

FlexCloud: A hosted private cloud based on the vRUN technology, available at multiple data centers worldwide.


2 Types of Storage

Block Storage:
• EMC, NetApp, ...
• Virtual Machines
• High performance, low latency
• Small capacity, typically fixed size
• Expensive
• Zero-copy snapshots, linked clones
• $/IOPS

Object Storage:
• Swift, Cleversafe, ...
• Unstructured data
• Low performance, high latency
• Large capacity, scalable
• Inexpensive, commodity hardware
• No high-end data management features
• $/GB

What is needed is a technology which offers Virtual Machines the performance and high-end features of a SAN, but also the benefits of the low cost and scale-out capabilities of object storage!

What is Open vStorage?

Open vStorage is an open-source software solution which creates a VM-centric, reliable, scale-out and high-performance storage layer for OpenStack Virtual Machines on top of object storage or a pool of Kinetic drives.

Open vStorage Feature Set

HyperFast

Scalable

Reliable

VM-Centric

Open-source

Low TCO

The architecture

[Architecture diagram: multiple scale-out hosts, each running OpenStack VMs on local SSDs with Open vStorage, share a unified namespace on top of S3-compatible object storage or a pool of Kinetic drives.]

Tier 1 - Location Based
• Read/Write cache on SSD
• Block-based storage
• Thin provisioning
• VM-centric
• Distributed Transaction Log

Tier 2 - Time Based
• Zero-copy snapshots
• Zero-copy cloning
• Continuous data protection
• Redundant storage
• Scale-out
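To make the tier split concrete, below is a minimal sketch of the read path across the two tiers, with hypothetical names (this is not Open vStorage code). Tier 1 metadata answers the location-based question of where an LBA's block currently lives; reads are served from the SSD cache when possible and fall back to the time-based containers on the backend.

```python
# Minimal two-tier read path (illustrative sketch, not Open vStorage code).
# Tier 1 is location-based: a map from LBA to the block's current location.
# Tier 2 is time-based: immutable containers stored on the object backend.

BLOCK = 4096

class TwoTierVolume:
    def __init__(self, ssd_cache, backend):
        self.lba_map = {}           # Tier 1 metadata: LBA -> (container, offset)
        self.ssd_cache = ssd_cache  # dict-like read/write cache on SSD
        self.backend = backend      # dict-like object store holding containers

    def read(self, lba):
        location = self.lba_map[lba]            # location-based lookup
        if location in self.ssd_cache:          # fast path: served from Tier 1
            return self.ssd_cache[location]
        container, offset = location            # slow path: fetch from Tier 2
        data = self.backend[container][offset:offset + BLOCK]
        self.ssd_cache[location] = data         # warm the cache for next time
        return data
```

The matching write path, which appends writes to Storage Container Objects before shipping them to the backend, is sketched in the technical slides at the end.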

Optimized storage architecture

Powered by Memory & SSD

• Deduplicated Read Cache: more effective use of Tier 1 storage
• Zero-copy cloning with linked clones
• Thin Provisioning
• Offload storage maintenance tasks to Tier 2

Unlimited scalability

Grow storage performance by adding more SSDs

Grow storage capacity by adding more disks

Asymmetric scalability of CPU and storage

No bottlenecks, no dual controllers

Hyper Reliability

More reliable than RAID 5
Supports Live Migration
Shared-nothing architecture

Synchronized Distributed Transaction Log

Unlimited snapshots, longer retention

Changes in Open vStorage 2 (End Q1 2015)

• Improved performance (3x) by tight integration with QEMU
– 50-70k IOPS per host
– Removes the NFS & FUSE performance loss

• Improved hardening against failure
– Seamless volume migration (no metadata rebuild)
– Limited impact of SSD failure

• Support for Seagate Kinetic drives as storage backend
– Encryption, compression, forward error correction
– Manage a pool of Kinetic drives as Tier 2 storage

• Focus on OpenStack/KVM

Deduplicated Clustered Tier One (A pool of Flash)

Further down the road ...

• Distributed Clustered Tier One
– Uses SSDs across the environment as one big shared, deduplicated Tier 1 read cache.
– Speed comparable with an all-flash array: almost all VM I/O will be served from flash.
– Scale storage performance by adding more SSDs.
– Limits the impact of an SSD failure. Hot cache in case of Live Migration.
(A rough sketch of the deduplication idea follows the diagram below.)

[Diagram: three scale-out OpenStack hosts; their SSDs together form one shared, deduplicated Tier 1 read cache in which each 4k block is addressed by its hash.]
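As a rough illustration of the deduplication idea in the bullets above (hypothetical code, not the actual implementation): keying cache entries by a content hash means identical 4k blocks from different VMs occupy cache space only once, and in the clustered variant the same hash can also pick which host's SSD holds the block.

```python
# Deduplicated read cache keyed by content hash (illustrative sketch).
import hashlib

class DedupReadCache:
    def __init__(self, num_hosts):
        self.num_hosts = num_hosts
        self.blocks = {}                        # hash -> 4k block

    @staticmethod
    def block_hash(block):
        return hashlib.sha1(block).hexdigest()

    def host_for(self, block_hash):
        # In the clustered variant the hash also selects the host whose SSD
        # caches the block, spreading one shared cache across the environment.
        return int(block_hash, 16) % self.num_hosts

    def put(self, block):
        self.blocks[self.block_hash(block)] = block  # N identical copies -> 1 entry

    def get(self, block_hash):
        return self.blocks.get(block_hash)           # None on a cache miss
```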

How do OpenStack and Open vStorage play along?


OpenStack Swift: some highlights

• Designed to store unstructured data in a cost-effective way
– Use low-cost, large-capacity SATA disks
– Increase capacity by adding more disks/servers when needed
– Increase performance by adding spindles/proxies

• High reliability by distributing content across disks
– 3-way replication
– Erasure coding (on the roadmap)

• Easy to manage (no knowledge needed about RAID or volumes)
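For a sense of how simple the Swift API is, the sketch below uploads and downloads an object, assuming the python-swiftclient library; the auth URL, credentials, container and object names are placeholders.

```python
# Storing and fetching unstructured data in Swift via python-swiftclient.
# Auth URL, credentials, container and object names are placeholders.
from swiftclient import client

conn = client.Connection(authurl='http://swift.example.com/auth/v1.0',
                         user='demo:demo', key='secret')

conn.put_container('backups')                      # create the container
with open('vm-image.raw', 'rb') as f:
    conn.put_object('backups', 'vm-image.raw',     # upload the object
                    contents=f)

headers, body = conn.get_object('backups', 'vm-image.raw')  # fetch it back
```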

[Diagram: Swift proxy nodes in front of a pool of storage nodes.]

Cinder: some highlights

• Cinder provides an infrastructure/API for managing volumes on OpenStack:
– Volume create, delete, list, show, attach, detach, extend
– Snapshot create, delete, list, show
– Backup create, restore, delete, list, show
– Manage volume types, quotas
– Migration

• By default Cinder uses local disks, but plugins allow additional storage solutions to be used:
– External appliances: EMC, NetApp, SolidFire
– Software solutions: GlusterFS, Ceph, …
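As a rough sketch of what such a plugin involves (illustrative only, this is not the Open vStorage driver): a Cinder driver subclasses the VolumeDriver base class and implements the volume lifecycle hooks that the Cinder volume service dispatches; the _backend_call helper below is a hypothetical stand-in for the real backend API.

```python
# Skeleton of a Cinder volume driver plugin (illustrative sketch).
from cinder.volume import driver


class ExampleVolumeDriver(driver.VolumeDriver):
    """Hypothetical driver: Cinder dispatches volume operations here."""

    def create_volume(self, volume):
        # Allocate volume['size'] GB of backing storage on the backend.
        self._backend_call('create', volume['name'], volume['size'])

    def delete_volume(self, volume):
        self._backend_call('delete', volume['name'])

    def create_snapshot(self, snapshot):
        self._backend_call('snapshot', snapshot['volume_name'],
                           snapshot['name'])

    def initialize_connection(self, volume, connector):
        # Tell the hypervisor how to attach the volume on its host; the
        # 'local' connection type and device path are illustrative.
        return {'driver_volume_type': 'local',
                'data': {'device_path': '/mnt/backend/%s' % volume['name']}}

    def _backend_call(self, op, *args):
        pass  # hypothetical stand-in for the storage backend's API calls
```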

Cinder with local disks has some problems ...

[Diagram: every Nova host attaches its Cinder volumes on local disks over iSCSI. Management nightmare!]

A traditional OpenStack setup

[Diagram: Nova provisions VMs; Glance provides images for them and stores them in Swift; Cinder provides volumes for them, backed by disk space from a SAN/NAS, and stores backups in Swift. The result: two storage platforms?!]

“Swift under Cinder”?

• Eventual consistency (the CAP Theorem)

• Latency & performance
– VMs require low latency and high performance
– Object stores are developed to contain lots of data (large disks, low performance)
– Additional latency, as the object store sits on the local LAN instead of being attached to the host like DAS

• Different management paradigms
– Object stores understand objects; hypervisors understand blocks and files

Open vStorage & OpenStack

[Diagram: the same OpenStack setup, but with Open vStorage between Cinder and Swift. Open vStorage converts object storage into block storage, so Swift provides the disk space and a single storage platform serves volumes, images and backups.]

Get Started with Open vStorage

Get the software

• Open vStorage as open-source software is released under the Apache License, Version 2.0
– Free to use, even in commercial products
– Open and free community help forum: https://groups.google.com/forum/?hl=en#!forum/open-vstorage
– You can contribute: https://bitbucket.org/openvstorage/

• Actively building a community
– Port of Open vStorage to CentOS
– Bug reporting and fixing
– Provide POCs for new features
– ...

Open vStorage Based Storage Solution

• To be released end Q1 2015
– Storage appliance + Open vStorage storage software
– Starter package: 48TB storage + license for Open vStorage (no restriction on amount of RAM, CPU and SSDs)
– Supported Open vStorage version
– Monitoring, support and maintenance included
– Low-cost pricing
– ...

Open vStorage Based Converged Solution

• To be released end Q1 2015
– Converged OpenStack solution based on Kinetic drives
– Starter package: 4 compute nodes, 48TB storage
– Supported Open vStorage version
– Supported OpenStack version
– Monitoring, support and maintenance included
– Low cost:
• 50% lower than EVO:RAIL
• 50% lower than Nutanix

Summary

Open vStorage Summary

• 50,000+ IOPS per hypervisor
• Made for OpenStack Virtual Machines
• Unified namespace
• Ultra-reliable
• Unlimited snapshots
• Endless scalability for both capacity and storage performance
• Lowest management cost in market

[Diagram: Open vStorage runs on each hypervisor, on top of S3-compatible object-based storage or a pool of Kinetic drives.]

Technical Slides

Solving Eventual Consistency Using a Time-Based Approach

[Diagram: on SSD or PCI flash, incoming 4k writes to LBAs (including overwrites of the same LBA) are appended in order to the current Storage Container Object (SCO 1, SCO 2, SCO 3, ...). Once a SCO is full (4MB), it is transferred to the storage backend at a slow pace.]
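A minimal sketch of that write path, with hypothetical names (not Open vStorage code): because writes are only ever appended, every SCO shipped to the backend is immutable and written under its name exactly once, which is why an eventually consistent object store is good enough.

```python
# Time-based write path, sketched: each 4k write is appended to the current
# Storage Container Object (SCO); a full 4MB SCO is shipped to the backend.
BLOCK = 4096
SCO_SIZE = 4 * 1024 * 1024

class WriteLog:
    def __init__(self, backend_put):
        self.backend_put = backend_put  # e.g. an S3 PUT, supplied by the caller
        self.sco_id = 1
        self.sco = bytearray()
        self.tlog = []                  # transaction log: (lba, sco_id, offset)

    def write(self, lba, block):
        assert len(block) == BLOCK
        # Append-only: overwriting an LBA just adds a newer entry; the
        # transaction log decides which entry is the current one.
        self.tlog.append((lba, self.sco_id, len(self.sco)))
        self.sco += block
        if len(self.sco) >= SCO_SIZE:
            # A full SCO is immutable and named exactly once, so reading it
            # back never races with an in-place overwrite on the object store.
            self.backend_put('sco_%06d' % self.sco_id, bytes(self.sco))
            self.sco_id += 1
            self.sco = bytearray()
```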

Open vStorage ≠ Distributed File System

[Diagram: three KVM hosts (KVM1-3), each running a VSA with an Object Router, a volume driver (VOLDRV) and a file driver (FILEDRV), exposing virtual file systems VFS1-3. Each vDisk is live on one host and maps to its own internal bucket, while Arakoon holds the distributed config parameters and metadata.]

Live Motion – In depth (Phase 1)

[Diagram, Phase 1: the VM live-migrates from one KVM host to another; its vDisk is still served by the Object Router on the original host.]

Live Motion – In depth (Phase 2)

[Diagram, Phase 2: the Object Routers perform a handover, after which the vDisk is live on the destination host.]

How does Open vStorage solve the problem?

• Open vStorage is a middleware layer in between the hypervisor and the object store (it converts object storage into block storage):
– On the host: location-based storage (block storage).
– On the backend: time-based storage (ideal for object stores).
– Open vStorage turns a volume into a single bucket.

• OpenStack Cinder plugin for easy integration (snapshots, ...).

• Distributed file systems don’t work! Open vStorage is not a distributed file system!
– All hosts ‘think’ they see the same virtual file systems.
– A volume is ‘live’ on 1 host instead of all hosts.
– Only the virtual file system metadata is distributed.

• Caching inside the host fixes the impedance mismatch between the slow, high-latency backend and the fast, low-latency requirements of Virtual Machines.