Webinar - Getting Started with Ceph


DESCRIPTION

The slides from our first webinar on getting started with Ceph. You can watch the full webinar on demand from http://www.inktank.com/news-events/webinars/. Enjoy!

TRANSCRIPT

Page 1: Webinar - Getting Started With Ceph

Inktank Delivering the Future of Storage

Getting Started with Ceph January 17, 2013

Page 2: Webinar - Getting Started With Ceph

Agenda

•  Inktank and Ceph Introduction

•  Ceph Technology

•  Getting Started Walk-through

•  Resources

•  Next steps

Page 3: Webinar - Getting Started With Ceph

Ceph

•  Distributed, unified object, block, and file storage platform

•  Created by storage experts

•  Open source

•  In the Linux kernel

•  Integrated into cloud platforms

Inktank

•  Company that provides professional services and support for Ceph

•  Founded in 2011

•  Funded by DreamHost

•  Mark Shuttleworth invested $1M

•  Sage Weil, CTO and creator of Ceph

Page 4: Webinar - Getting Started With Ceph

Ceph Technological Foundations

Ceph was built with the following goals:

•  Every component must scale

•  There can be no single point of failure

•  The solution must be software-based, not an appliance

•  It should run on readily available, commodity hardware

•  Everything must self-manage wherever possible

•  It must be open source


Page 5: Webinar - Getting Started With Ceph

Key Differences

•  CRUSH data placement algorithm (Object)
   Intelligent storage nodes (placement is illustrated below)

•  Unified storage platform (Object + Block + File)
   All use cases (cloud, big data, legacy, web app, archival, etc.) satisfied in a single cluster

•  Thinly provisioned virtual block device (Block)
   Cloud storage block for VM images

•  Distributed scalable metadata servers (CephFS)
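Because placement is computed by the CRUSH algorithm rather than looked up in a central table, any client can ask where an object will live. A minimal illustration to try once the cluster from the walk-through below is running; the object name "myobject" is just a placeholder, and the output format varies slightly between Ceph releases:

# Ask CRUSH which placement group and OSDs a given object name maps to
~$ ceph osd map rbd myobject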

Page 6: Webinar - Getting Started With Ceph

Ceph Use Cases

Object
•  Archival and backup storage
•  Primary data storage
•  S3-like storage
•  Web services and platforms
•  Application development

Block
•  SAN replacement
•  Virtual block device, VM images

File
•  HPC
•  POSIX-compatible applications

Page 7: Webinar - Getting Started With Ceph

Ceph Technology Overview

Page 8: Webinar - Getting Started With Ceph

Ceph Object Storage (RADOS) A reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes

Ceph Object Library (LIBRADOS) A library allowing applications to directly access Ceph Object Storage

Ceph Block (RBD) A reliable and fully-distributed block device

Ceph Distributed File System (CephFS) A POSIX-compliant distributed file system

Ceph Object Gateway (RADOS Gateway) A RESTful gateway for object storage

[Stack diagram: apps, hosts/VMs, and clients consuming the Ceph storage layers described above.]
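Applications normally reach RADOS through LIBRADOS or one of the layers above, but the rados command-line tool (part of the ceph-common package installed later in this walk-through) is a convenient way to poke at the object store directly. A minimal sketch, assuming the default rbd pool and placeholder object and file names:

# Store a local file as an object, list the pool's objects, then read the object back
~$ echo "hello ceph" > /tmp/hello.txt
~$ rados -p rbd put hello-object /tmp/hello.txt
~$ rados -p rbd ls
~$ rados -p rbd get hello-object /tmp/hello-copy.txt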

Page 9: Webinar - Getting Started With Ceph


RADOS Components

Monitors:
•  Maintain the cluster map
•  Provide consensus for distributed decision-making
•  Must exist in an odd number
•  Do not serve stored objects to clients

RADOS storage nodes containing Object Storage Daemons (OSDs):
•  One OSD per disk (recommended)
•  At least three nodes in a cluster
•  Serve stored objects to clients
•  Intelligently peer to perform replication tasks
•  Support object classes

Page 10: Webinar - Getting Started With Ceph


RADOS Cluster Makeup

[Diagram: each RADOS node runs one OSD per data disk, with a local filesystem (btrfs, xfs, or ext4) beneath each OSD; several RADOS nodes together with the monitors (M) form the RADOS cluster.]

Page 11: Webinar - Getting Started With Ceph

VOTE: Using the Votes button at the top of the presentation panel,

please take 30 seconds to answer the following questions to help us better understand you.

1.  Are you exploring Ceph for a current project?

2.  Are you looking to implement Ceph within the next 6 months?

3.  Do you need help deploying Ceph?

Page 12: Webinar - Getting Started With Ceph

Getting Started Walk-through

Page 13: Webinar - Getting Started With Ceph

Overview

•  This tutorial and walk-through are based on VirtualBox, but other hypervisor platforms will work just as well.

•  We relaxed security best practices to speed things up and will omit some of the security setup steps here.

•  We will:

   1.  Create the VirtualBox VMs
   2.  Prepare the VMs for creating the Ceph cluster
   3.  Install Ceph on all VMs from the client
   4.  Configure Ceph on all the server nodes and the client
   5.  Experiment with Ceph's virtual block device (RBD)
   6.  Experiment with the Ceph distributed filesystem
   7.  Unmount, stop Ceph, and shut down the VMs safely

Page 14: Webinar - Getting Started With Ceph

Create the VMs

Each of the four VMs:

•  1 or more CPU cores
•  512MB or more memory
•  Ubuntu 12.04 with the latest updates
•  VirtualBox Guest Additions
•  Three virtual disks (dynamically allocated):
   •  28GB OS disk with boot partition
   •  8GB disk for Ceph data
   •  8GB disk for Ceph data
•  Two virtual network interfaces:
   •  eth0: Host-Only interface for Ceph
   •  eth1: NAT interface for updates

Consider creating a template based on the above, and then cloning the template to save time creating all four VMs (a VBoxManage sketch of this follows below).
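For those who prefer the command line to the VirtualBox GUI, the template VM can also be sketched with VBoxManage. This is illustrative only: the VM name ceph-template, the host-only adapter vboxnet0, and the disk file names are assumptions, and storage-controller details may differ across VirtualBox versions.

# Create and register a template VM (1 CPU, 512MB RAM, host-only + NAT NICs)
~$ VBoxManage createvm --name ceph-template --ostype Ubuntu_64 --register
~$ VBoxManage modifyvm ceph-template --cpus 1 --memory 512 \
     --nic1 hostonly --hostonlyadapter1 vboxnet0 --nic2 nat

# Create the three dynamically allocated disks (sizes in MB)
~$ VBoxManage createhd --filename ceph-template-os.vdi    --size 28672
~$ VBoxManage createhd --filename ceph-template-data1.vdi --size 8192
~$ VBoxManage createhd --filename ceph-template-data2.vdi --size 8192

# Attach the disks to a SATA controller
~$ VBoxManage storagectl ceph-template --name "SATA" --add sata
~$ VBoxManage storageattach ceph-template --storagectl "SATA" --port 0 --device 0 --type hdd --medium ceph-template-os.vdi
~$ VBoxManage storageattach ceph-template --storagectl "SATA" --port 1 --device 0 --type hdd --medium ceph-template-data1.vdi
~$ VBoxManage storageattach ceph-template --storagectl "SATA" --port 2 --device 0 --type hdd --medium ceph-template-data2.vdi

# After installing Ubuntu and the Guest Additions in the template,
# clone it into the four VMs used in this walk-through
~$ for vm in ceph-client ceph-node1 ceph-node2 ceph-node3; do
     VBoxManage clonevm ceph-template --name $vm --register
   done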

Page 15: Webinar - Getting Started With Ceph

Adjust Networking in the VM OS

•  Edit /etc/network/interfaces (the address shown is for ceph-client; each VM gets its own address from the /etc/hosts table later):

   # The primary network interface
   auto eth0
   iface eth0 inet static
       address 192.168.56.20
       netmask 255.255.255.0

   # The secondary NAT interface with outside access
   auto eth1
   iface eth1 inet dhcp
       gateway 10.0.3.2

•  Edit /etc/udev/rules.d/70-persistent-net.rules
   If the VMs were cloned from a template, the MAC addresses for the virtual NICs should have been regenerated to stay unique. Edit this file to make sure that the right NICs are mapped to eth0 and eth1 (an example rule is sketched below).
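For reference, the entries in 70-persistent-net.rules on Ubuntu 12.04 look roughly like the following; the MAC addresses shown are placeholders, so match them to the addresses VirtualBox assigned to each adapter:

# Host-Only adapter -> eth0 (MAC address is a placeholder)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:aa:bb:01", KERNEL=="eth*", NAME="eth0"

# NAT adapter -> eth1 (MAC address is a placeholder)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="08:00:27:aa:bb:02", KERNEL=="eth*", NAME="eth1"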

Page 16: Webinar - Getting Started With Ceph

Security Shortcuts

To streamline and simplify access for this tutorial, we:

•  Configured the user “ubuntu” to SSH between hosts using authorized keys instead of a password.

•  Added “ubuntu” to /etc/sudoers with full access.

•  Configured root on the server nodes to SSH between nodes using authorized keys without a password set.

•  Relaxed SSH checking of known hosts to avoid interactive confirmation when accessing a new host.

•  Disabled cephx authentication for the Ceph cluster (done in the ceph.conf shown later). The SSH and sudo shortcuts above are sketched below.
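A minimal sketch of how the SSH and sudo shortcuts might be set up from the client, assuming the hostnames used later in this tutorial; this is for convenience in a throwaway lab, not a recommended production configuration:

# Passwordless SSH key for the ubuntu user, copied to each server node
~$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
~$ for h in ceph-node1 ceph-node2 ceph-node3; do ssh-copy-id ubuntu@$h; done

# Passwordless sudo for "ubuntu" (relaxed security, tutorial only)
~$ echo "ubuntu ALL=(ALL) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ubuntu
~$ sudo chmod 0440 /etc/sudoers.d/ubuntu

# Skip interactive host-key confirmation for the ceph-* hosts
~$ printf "Host ceph-*\n    StrictHostKeyChecking no\n" >> ~/.ssh/config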

Page 17: Webinar - Getting Started With Ceph

Edit /etc/hosts to resolve names

•  Use the /etc/hosts file for simple name resolution for all the VMs on the Host-Only network.

•  Create a portable /etc/hosts file on the client:

   127.0.0.1      localhost
   192.168.56.20  ceph-client
   192.168.56.21  ceph-node1
   192.168.56.22  ceph-node2
   192.168.56.23  ceph-node3

•  Copy the file to all the VMs so that names are consistently resolved across all machines.
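One way to push the file out from the client, assuming the passwordless SSH and sudo shortcuts described earlier; a sketch only:

# Copy the client's /etc/hosts to every server node
~$ for h in ceph-node1 ceph-node2 ceph-node3; do
     cat /etc/hosts | ssh ubuntu@$h "sudo tee /etc/hosts > /dev/null"
   done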

Page 18: Webinar - Getting Started With Ceph

Install the Ceph Bobtail release

ubuntu@ceph-client:~$ wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | ssh ceph-node1 sudo apt-key add -
OK
ubuntu@ceph-client:~$ echo "deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main" | ssh ceph-node1 sudo tee /etc/apt/sources.list.d/ceph.list
deb http://ceph.com/debian-bobtail/ precise main
ubuntu@ceph-client:~$ ssh ceph-node1 "sudo apt-get update && sudo apt-get install ceph"
...
Setting up librados2 (0.56.1-1precise) ...
Setting up librbd1 (0.56.1-1precise) ...
Setting up ceph-common (0.56.1-1precise) ...
Installing new version of config file /etc/bash_completion.d/rbd ...
Setting up ceph (0.56.1-1precise) ...
Setting up ceph-fs-common (0.56.1-1precise) ...
Setting up ceph-fuse (0.56.1-1precise) ...
Setting up ceph-mds (0.56.1-1precise) ...
Setting up libcephfs1 (0.56.1-1precise) ...
...
ldconfig deferred processing now taking place
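The slide shows the commands for ceph-node1 only; the same steps have to be repeated for ceph-node2 and ceph-node3, and the client needs the Ceph packages as well (it uses the rbd and mount.ceph tools later). A loop such as the following, a sketch assuming the hostnames above, avoids retyping:

~$ for h in ceph-node1 ceph-node2 ceph-node3; do
     wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | ssh $h sudo apt-key add -
     echo "deb http://ceph.com/debian-bobtail/ $(lsb_release -sc) main" | ssh $h sudo tee /etc/apt/sources.list.d/ceph.list
     ssh $h "sudo apt-get update && sudo apt-get -y install ceph"
   done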

Page 19: Webinar - Getting Started With Ceph

Create the Ceph Configuration File

~$ sudo cat <<! > /etc/ceph/ceph.conf
[global]
    auth cluster required = none
    auth service required = none
    auth client required = none

[osd]
    osd journal size = 1000
    filestore xattr use omap = true
    osd mkfs type = ext4
    osd mount options ext4 = user_xattr,rw,noexec,nodev,noatime,nodiratime

[mon.a]
    host = ceph-node1
    mon addr = 192.168.56.21:6789

[mon.b]
    host = ceph-node2
    mon addr = 192.168.56.22:6789

[mon.c]
    host = ceph-node3
    mon addr = 192.168.56.23:6789

[osd.0]
    host = ceph-node1
    devs = /dev/sdb

[osd.1]
    host = ceph-node1
    devs = /dev/sdc

…

[osd.5]
    host = ceph-node3
    devs = /dev/sdc

[mds.a]
    host = ceph-node1
!

Page 20: Webinar - Getting Started With Ceph

Complete Ceph Cluster Creation

•  Copy the /etc/ceph/ceph.conf file to all nodes
•  Create the Ceph daemon working directories:

~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/osd/ceph-0
~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/osd/ceph-1
~$ ssh ceph-node2 sudo mkdir -p /var/lib/ceph/osd/ceph-2
~$ ssh ceph-node2 sudo mkdir -p /var/lib/ceph/osd/ceph-3
~$ ssh ceph-node3 sudo mkdir -p /var/lib/ceph/osd/ceph-4
~$ ssh ceph-node3 sudo mkdir -p /var/lib/ceph/osd/ceph-5
~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/mon/ceph-a
~$ ssh ceph-node2 sudo mkdir -p /var/lib/ceph/mon/ceph-b
~$ ssh ceph-node3 sudo mkdir -p /var/lib/ceph/mon/ceph-c
~$ ssh ceph-node1 sudo mkdir -p /var/lib/ceph/mds/ceph-a

•  Run the mkcephfs command from a server node:

ubuntu@ceph-client:~$ ssh ceph-node1
Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-23-generic x86_64)
...
ubuntu@ceph-node1:~$ sudo -i
root@ceph-node1:~# cd /etc/ceph
root@ceph-node1:/etc/ceph# mkcephfs -a -c /etc/ceph/ceph.conf -k ceph.keyring --mkfs

Page 21: Webinar - Getting Started With Ceph

Start the Ceph Cluster

On a server node, start the Ceph service:

root@ceph-node1:/etc/ceph# service ceph -a start
=== mon.a ===
Starting Ceph mon.a on ceph-node1...
starting mon.a rank 0 at 192.168.56.21:6789/0 mon_data /var/lib/ceph/mon/ceph-a fsid 11309f36-9955-413c-9463-efae6c293fd6
=== mon.b ===
=== mon.c ===
=== mds.a ===
Starting Ceph mds.a on ceph-node1...
starting mds.a at :/0
=== osd.0 ===
Mounting ext4 on ceph-node1:/var/lib/ceph/osd/ceph-0
Starting Ceph osd.0 on ceph-node1...
starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
=== osd.1 ===
=== osd.2 ===
=== osd.3 ===
=== osd.4 ===
=== osd.5 ===

Page 22: Webinar - Getting Started With Ceph

Verify Cluster Health

root@ceph-node1:/etc/ceph# ceph status
   health HEALTH_OK
   monmap e1: 3 mons at {a=192.168.56.21:6789/0,b=192.168.56.22:6789/0,c=192.168.56.23:6789/0}, election epoch 6, quorum 0,1,2 a,b,c
   osdmap e17: 6 osds: 6 up, 6 in
   pgmap v473: 1344 pgs: 1344 active+clean; 8730 bytes data, 7525 MB used, 39015 MB / 48997 MB avail
   mdsmap e9: 1/1/1 up {0=a=up:active}

root@ceph-node1:/etc/ceph# ceph osd tree
# id    weight  type name               up/down reweight
-1      6       root default
-3      6         rack unknownrack
-2      2           host ceph-node1
0       1             osd.0             up      1
1       1             osd.1             up      1
-4      2           host ceph-node2
2       1             osd.2             up      1
3       1             osd.3             up      1
-5      2           host ceph-node3
4       1             osd.4             up      1
5       1             osd.5             up      1
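Two other commands that are handy at this point, shown as a sketch against the cluster above:

# Watch cluster state changes stream by in real time (Ctrl-C to stop)
root@ceph-node1:/etc/ceph# ceph -w

# Show per-pool space usage as seen by the object store
root@ceph-node1:/etc/ceph# rados df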

Page 23: Webinar - Getting Started With Ceph

Access Ceph's Virtual Block Device

ubuntu@ceph-client:~$ rbd ls
rbd: pool rbd doesn't contain rbd images

ubuntu@ceph-client:~$ rbd create myLun --size 4096
ubuntu@ceph-client:~$ rbd ls -l
NAME   SIZE   PARENT  FMT  PROT  LOCK
myLun  4096M          1

ubuntu@ceph-client:~$ sudo modprobe rbd

ubuntu@ceph-client:~$ sudo rbd map myLun --pool rbd
ubuntu@ceph-client:~$ sudo rbd showmapped
id  pool  image  snap  device
0   rbd   myLun  -     /dev/rbd0

ubuntu@ceph-client:~$ ls -l /dev/rbd
rbd/  rbd0

ubuntu@ceph-client:~$ ls -l /dev/rbd/rbd/myLun
… 1 root root 10 Jan 16 21:15 /dev/rbd/rbd/myLun -> ../../rbd0

ubuntu@ceph-client:~$ ls -l /dev/rbd0
brw-rw---- 1 root disk 251, 0 Jan 16 21:15 /dev/rbd0
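To inspect the image that was just created (its size, object layout, and format), rbd also provides an info subcommand; a quick, illustrative sketch:

# Show size, object layout, and format details for the image
ubuntu@ceph-client:~$ rbd info myLun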

Page 24: Webinar - Getting Started With Ceph

Format RBD image and use it

ubuntu@ceph-client:~$ sudo mkfs.ext4 -m0 /dev/rbd/rbd/myLun
mke2fs 1.42 (29-Nov-2011)

...

Writing superblocks and filesystem accounting information: done

ubuntu@ceph-client:~$ sudo mkdir /mnt/myLun

ubuntu@ceph-client:~$ sudo mount /dev/rbd/rbd/myLun /mnt/myLun
ubuntu@ceph-client:~$ df -h | grep myLun

/dev/rbd0 4.0G 190M 3.9G 5% /mnt/myLun

ubuntu@ceph-client:~$ sudo dd if=/dev/zero of=/mnt/myLun/testfile bs=4K count=128

128+0 records in

128+0 records out

524288 bytes (524 kB) copied, 0.000431868 s, 1.2 GB/s

ubuntu@ceph-client:~$ ls -lh /mnt/myLun/

total 528K

drwx------ 2 root root 16K Jan 16 21:24 lost+found

-rw-r--r-- 1 root root 512K Jan 16 21:29 testfile

Page 25: Webinar - Getting Started With Ceph

Access Ceph Distributed Filesystem

~$ sudo mkdir /mnt/myCephFS
~$ sudo mount.ceph ceph-node1,ceph-node2,ceph-node3:/ /mnt/myCephFS

~$ df -h | grep my
192.168.56.21,192.168.56.22,192.168.56.23:/   48G   11G   38G  22% /mnt/myCephFS
/dev/rbd0                                    4.0G  190M  3.9G   5% /mnt/myLun

~$ sudo dd if=/dev/zero of=/mnt/myCephFS/testfile bs=4K count=128
128+0 records in
128+0 records out
524288 bytes (524 kB) copied, 0.000439191 s, 1.2 GB/s

~$ ls -lh /mnt/myCephFS/
total 512K
-rw-r--r-- 1 root root 512K Jan 16 23:04 testfile
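If the cephfs kernel client is not available, the same filesystem can be mounted in user space with ceph-fuse, which was installed along with the other Ceph packages earlier. A sketch, assuming the monitor address used in this walk-through:

# Mount CephFS via FUSE instead of the kernel client
~$ sudo ceph-fuse -m 192.168.56.21:6789 /mnt/myCephFS

# A FUSE mount is unmounted with fusermount
~$ sudo fusermount -u /mnt/myCephFS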

Page 26: Webinar - Getting Started With Ceph

Unmount, Stop Ceph, and Halt

ubuntu@ceph-client:~$ sudo umount /mnt/myCephFS
ubuntu@ceph-client:~$ sudo umount /mnt/myLun/
ubuntu@ceph-client:~$ sudo rbd unmap /dev/rbd0
ubuntu@ceph-client:~$ ssh ceph-node1 sudo service ceph -a stop
=== mon.a ===
Stopping Ceph mon.a on ceph-node1...kill 19863...done
=== mon.b ===
=== mon.c ===
=== mds.a ===
=== osd.0 ===
=== osd.1 ===
=== osd.2 ===
=== osd.3 ===
=== osd.4 ===
=== osd.5 ===
ubuntu@ceph-client:~$ ssh ceph-node1 sudo service halt stop
 * Will now halt
^Cubuntu@ceph-client:~$ ssh ceph-node2 sudo service halt stop
 * Will now halt
^Cubuntu@ceph-client:~$ ssh ceph-node3 sudo service halt stop
 * Will now halt
^Cubuntu@ceph-client:~$ sudo service halt stop
 * Will now halt

Page 27: Webinar - Getting Started With Ceph

Review

We:

1.  Created the VirtualBox VMs
2.  Prepared the VMs for creating the Ceph cluster
3.  Installed Ceph on all VMs from the client
4.  Configured Ceph on all the server nodes and the client
5.  Experimented with Ceph's virtual block device (RBD)
6.  Experimented with the Ceph distributed filesystem
7.  Unmounted, stopped Ceph, and shut down the VMs safely

•  Based on VirtualBox; other hypervisors work too.
•  Relaxed security best practices to speed things up, but we recommend following them in most circumstances.

Page 28: Webinar - Getting Started With Ceph

Resources for Learning More

Page 29: Webinar - Getting Started With Ceph

Leverage great online resources

Documentation on the Ceph web site: •  http://ceph.com/docs/master/

Blogs from Inktank and the Ceph community: •  http://www.inktank.com/news-events/blog/ •  http://ceph.com/community/blog/

Developer resources: •  http://ceph.com/resources/development/ •  http://ceph.com/resources/mailing-list-irc/ •  http://dir.gmane.org/gmane.comp.file-systems.ceph.devel

Page 30: Webinar - Getting Started With Ceph

What Next?


Page 31: Webinar - Getting Started With Ceph

Try it yourself!

•  Use the information in this webinar as a starting point
•  Consult the Ceph documentation online:
   http://ceph.com/docs/master/
   http://ceph.com/docs/master/start/

Page 32: Webinar - Getting Started With Ceph

Inktank’s Professional Services

Consulting Services:
•  Technical Overview
•  Infrastructure Assessment
•  Proof of Concept
•  Implementation Support
•  Performance Tuning

Support Subscriptions:
•  Pre-Production Support
•  Production Support

A full description of our services can be found at the following:
Consulting Services: http://www.inktank.com/consulting-services/
Support Subscriptions: http://www.inktank.com/support-services/


Page 33: Webinar - Getting Started With Ceph

Check out our upcoming webinars

1.  Introduction to Ceph with OpenStack
    January 24, 2013 - 10:00AM PT, 12:00PM CT, 1:00PM ET
    https://www.brighttalk.com/webcast/8847/63177

2.  DreamHost Case Study: DreamObjects with Ceph
    February 7, 2013 - 10:00AM PT, 12:00PM CT, 1:00PM ET
    https://www.brighttalk.com/webcast/8847/63181

3.  Advanced Features of Ceph Distributed Storage (delivered by Sage Weil, creator of Ceph)
    February 12, 2013 - 10:00AM PT, 12:00PM CT, 1:00PM ET
    https://www.brighttalk.com/webcast/8847/63179

Page 34: Webinar - Getting Started With Ceph

Contact Us

[email protected]
1-855-INKTANK

Don't forget to follow us on:
Twitter: https://twitter.com/inktank
Facebook: http://www.facebook.com/inktank
YouTube: http://www.youtube.com/inktankstorage