Mi-ROSS: Reliable Object Storage System for Software Defined Storage and Cloud
TRANSCRIPT
2/23/16
1
Reliable Object Storage System for Software Defined Storage and Cloud
Mi-ROSS
Luke Jing Yuan, Mohd Bazli Ab Karim, Wong Ming Tat
Storage Systems, Advanced Computing Lab
Agenda
• Motivations
• Ceph?
• Why Ceph as Backend? Some Use Cases
• Some Little Annoyances
• What’s Mi-ROSS
• Features
• Demo???
Motivations
[Figure: The Hype Cycle for Storage Technologies, 2014]
Motivations (cont’d)

Consider the normal “branded” way:
• Average cost per 100TB (raw)? ~RM10M++
• More features required? RMxk < +y < RMxxxk
• More space required? Most likely only from the same vendor
• Storage network (SAN)? RMxxk < +z < RMxxxk

Going commodity and open platform:
• Open source storage platform/software
• 2U x86-based, 12 x 4TB (raw): ~RM50k/unit
• 1/10Gbps Ethernet switch: <RM100k/unit
• More space required? Just get any x86 box and disks
• Features? Mostly already available in the open source storage software
• = 144TB (raw) @ <RM350k

Can we go commodity and open platform? After some studies… Ceph is chosen.
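The commodity arithmetic on this slide can be sanity-checked. The server and switch unit counts below are assumptions (the slide only quotes per-unit prices and the 144TB total), chosen so the totals line up:

```python
# Sanity-check the commodity build quoted on the slide.
# Assumed (not stated on the slide): 3 server units and 2 switches.
disks_per_server = 12
tb_per_disk = 4
servers = 3

raw_tb = servers * disks_per_server * tb_per_disk
print(raw_tb)  # 144 TB raw, matching the "=144TB (raw)" line

server_cost = servers * 50_000   # ~RM50k per 2U x86 unit
switch_cost = 2 * 100_000        # two Ethernet switches at <RM100k each
total = server_cost + switch_cost
print(total)   # 350000 -> consistent with "@ <RM350k"
```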
Ceph?
• It’s an open source, scalable and reliable distributed object storage
• Stores data by striping/chunking it into smaller objects and distributing those objects across different storage elements (disks)
• Objects can be replicated multiple times for redundancy, or stored using erasure coding techniques when storage capacity matters more
• Clients access the distributed storage via RADOS Block Device (RBD), CephFS, RADOS Gateway (RGW), or the Ceph/RADOS libraries
• KVM (QEMU-KVM), libvirt and OpenNebula have Ceph support
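The capacity trade-off between the two redundancy schemes above can be sketched numerically. The 3x replication factor and the EC 4+2 profile are illustrative choices, not figures from the slide:

```python
# Usable capacity under the two Ceph redundancy schemes mentioned:
# N-way replication vs. erasure coding with k data + m coding chunks.

def usable_replicated(raw_tb, replicas):
    # Each object is stored `replicas` times in full.
    return raw_tb / replicas

def usable_erasure_coded(raw_tb, k, m):
    # Each object is split into k data chunks plus m coding chunks.
    return raw_tb * k / (k + m)

raw = 144  # TB raw, reusing the commodity example earlier in the deck
print(usable_replicated(raw, 3))        # 48.0 TB usable with 3x replication
print(usable_erasure_coded(raw, 4, 2))  # 96.0 TB usable with EC 4+2
```

Same raw capacity, double the usable space — which is why the slide frames erasure coding as the choice “if storage capacity is desired.”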
Ceph? (cont’d)
[Ceph architecture diagram: APP, HOST/VM and CLIENT access layers over RADOS. Source: Patrick McGarry, Inktank]
Why Ceph as Backend? A Use Case
• Let’s consider a typical DR deployment scenario:
[Diagram: a Data Center with SAN/NAS ($$$) doing R/W replication to Disaster Recovery Site(s), also on SAN/NAS ($$$)]
• What if?

Use Case (cont’d)
[Diagram: instead of SAN/NAS ($$$) at each site, Data Center 1 and Data Center 2 use local/DAS storage ($); Mi-ROSS presents one/multiple virtual volume(s) across the sites over Software-Defined Networking, with replication, data striping and parallel R/W]
• Programmable
• Redundancy
• Availability/Reliability
• Performance
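The “replication, data striping and parallel R/W” idea in the diagram can be illustrated with a toy model: a volume is chunked into fixed-size objects and the objects are spread round-robin across sites. The object size and site names are made up for the example; Ceph’s real placement uses CRUSH, not round-robin:

```python
# Toy illustration of striping a volume's data across sites.
from collections import defaultdict

def stripe(data: bytes, object_size: int, sites: list):
    """Chunk `data` into objects and assign them to sites round-robin."""
    placement = defaultdict(list)
    for i in range(0, len(data), object_size):
        chunk = data[i:i + object_size]
        site = sites[(i // object_size) % len(sites)]
        placement[site].append(chunk)
    return placement

volume = b"x" * 10                       # a tiny 10-byte "volume"
layout = stripe(volume, 4, ["DC1", "DC2"])
print({site: len(chunks) for site, chunks in layout.items()})
# {'DC1': 2, 'DC2': 1}
```

Because the objects land on different sites, reads and writes to different chunks can proceed in parallel, which is the point the diagram is making.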
Use Case #2
• Initial POC simulates both KHTP and TPM with 3 zones, using existing but slightly different hardware configurations (different disk specs).
• For actual implementation, replicate the POC setup but with similar hardware configurations.
  – KHTP zones setup is all in a single data center
  – TPM zones setup uses both DC1 and DC2
[Diagram: zones 1–3 spanning MIMOS KHTP and MIMOS TPM (HPCC1, HPCC2)]
Use Case #3: VDI
• Due to project requirements, we needed a controlled environment where users remotely access a Windows desktop/client for development
• Solution: OpenNebula + Ceph (Emperor)
  – 60+ Windows 7 VMs with RDP
  – 30+ development VMs
  – Additional attached storage
Some Little Annoyances
• Command-line-driven management
• How to ease management of pools and other capabilities?
• What if I need to access the storage differently?
  – NFS
  – SAMBA
  – Etc.
• Is there a way to orchestrate, or provide a management interface to, other cloud management platforms, e.g. OpenNebula?
  – Register pools
  – Configure libvirt
  – Etc.
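The NFS item above hides a multi-step manual workflow — the kind of thing a management layer like Mi-ROSS would wrap. A sketch, assuming a pool named `mypool`, kernel RBD support on the gateway host, and a made-up export subnet (these commands need root and a live cluster, so treat this as a recipe, not a script):

```shell
# Manually re-exporting a Ceph RBD image over NFS.
rbd create mypool/share01 --size 102400    # 100 GiB image in pool "mypool"
rbd map mypool/share01                     # shows up as e.g. /dev/rbd0
mkfs.ext4 /dev/rbd0                        # put a filesystem on the image
mkdir -p /export/share01
mount /dev/rbd0 /export/share01
echo "/export/share01 10.0.0.0/24(rw,sync,no_root_squash)" >> /etc/exports
exportfs -ra                               # re-read /etc/exports
```

Every step has its own failure modes and cleanup path (unmap, unmount, remove the export), which is exactly the management burden the slide is complaining about.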
What’s Mi-ROSS
• Provides simple access to, and management of, a distributed storage that can be deployed both in a LAN and in a WAN/campus network.
• Leverages the availability and redundancy provided by its chosen backend, i.e. Ceph.
• Mi-ROSS – the MIMOS Reliable Object Storage System – is an initiative in Software-Defined Storage.
Mi-ROSS Dashboard/Simple Monitoring
Mi-ROSS Pools & Block Devices Management
Mi-ROSS NFS Management
Mi-ROSS Samba Management
Disclaimer: We are running this on a production environment
DEMO
What’s Next?
• Additional management features
• Hierarchical storage and data management
• New export option(s) (e.g. iSCSI)
• Web services
• Better integration with OpenNebula/Mi-Cloud