QCT Ceph Solution - Design Consideration and Reference Architecture
TRANSCRIPT
QCT Ceph Solution – Design Consideration and Reference Architecture
Gary Lee, AVP, QCT
• Industry Trend and Customer Needs
• Ceph Architecture
• Technology
• Ceph Reference Architecture and QCT Solution
• Test Result
• QCT/Red Hat Ceph Whitepaper
AGENDA
Industry Trend and Customer Needs
• Structured Data -> Unstructured/Structured Data
• Data -> Big Data, Fast Data
• Data Processing -> Data Modeling -> Data Science
• IT -> DT
• Monolithic -> Microservice
Industry Trend
• Scalable Size
• Variable Type
• Longevity
• Distributed Location
• Versatile Workload
• Affordable Price
• Available Service
• Continuous Innovation
• Consistent Management
• Neutral Vendor
Customer Needs
Ceph Architecture
Ceph Storage Cluster
[Diagram: cluster network connecting N identical nodes, each running Ceph on Linux over commodity CPU, memory, SSD, HDD, and NIC]
• Unified Storage: Object, Block, File
• Scale-out Cluster
• Open Source Software
• Open Commodity Hardware
[Diagram: App/Service -> Ceph Client (RBD for block I/O, RADOSGW for object I/O, Ceph FS for file I/O) -> Public Network -> RADOS/Cluster Network -> OSD -> file system I/O -> disk I/O]
End-to-end Data Path
Ceph Software Architecture
[Diagram: Clients and Ceph Monitor on the Public Network (ex. 10GbE or 40GbE); Nx Ceph OSD Nodes (RCT or RCC) attached to both the Public Network and the Cluster Network (ex. 10GbE or 40GbE)]
Ceph Hardware Architecture
Technology
• 2x Intel E5-2600 CPU
• 16x DDR4 Memory
• 12x 3.5" SAS/SATA HDD
• 4x SATA SSD + PCIe M.2
• 1x SATADOM
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 Mezz Card
• 1x PCIe x8 SAS Controller
• 1U
QCT Ceph Storage Server D51PH-1ULH
• Mono/Dual Node
• 2x Intel E5-2600 CPU
• 16x DDR4 Memory
• 78x or 2x 35x SSD/HDD
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 SAS Controller
• 1x PCIe x8 HHHL Card
• 1x PCIe x16 FHHL Card
• 4U
QCT Ceph Storage Server T21P-4U
• 1x Intel Xeon D SoC CPU
• 4x DDR4 Memory
• 12x SAS/SATA HDD
• 4x SATA SSD
• 2x SATA SSD for OS
• 1x 1G/10G NIC
• BMC with 1G NIC
• 1x PCIe x8 Mezz Card
• 1x PCIe x8 SAS Controller
• 1U
QCT Ceph Storage Server SD1Q-1ULH
![Page 16: QCT Ceph Solution - Design Consideration and Reference Architecture](https://reader036.vdocuments.net/reader036/viewer/2022062522/58e74a2d1a28abd63a8b5865/html5/thumbnails/16.jpg)
16
• Standalone, without EC
• Standalone, with EC
• Hyper-converged, without EC
• High Core vs. High Frequency
• 1x OSD ~ (0.3-0.5)x Core + 2G RAM
CPU/Memory
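The per-OSD rule of thumb above can be sketched as a small sizing helper. This is a minimal sketch of the slide's guidance only (0.3-0.5 core and ~2 GB RAM per OSD); the function name and defaults are illustrative, not a QCT tool:

```python
import math

def osd_node_sizing(num_osds, cores_per_osd=0.5, ram_gb_per_osd=2):
    """Rule of thumb from the slide: each OSD daemon needs roughly
    0.3-0.5 CPU core and about 2 GB of RAM."""
    return {
        "min_cores": math.ceil(num_osds * cores_per_osd),
        "min_ram_gb": num_osds * ram_gb_per_osd,
    }

# A 12-HDD node (one OSD per HDD), sized at the conservative end:
print(osd_node_sizing(12))  # {'min_cores': 6, 'min_ram_gb': 24}
```

The 24 GB result matches the "12 (OSD) x 2GB = 24 GB" figure used in the throughput-optimized test configuration later in this deck.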
• SSD roles: Journal, Tier, File System Cache, Client Cache
• Journal ratios (HDDs per journal device):
– HDD : SATA/SAS SSD = 4~5 : 1
– HDD : NVMe = 12~18 : 1
SSD/NVMe
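The journal ratios above translate directly into device counts. A minimal sketch, assuming one OSD per HDD; the helper name is ours:

```python
import math

def journal_devices(num_hdds, hdds_per_journal):
    """Number of journal devices needed at the slide's ratios:
    4-5 HDDs per SATA/SAS SSD, 12-18 HDDs per NVMe device."""
    return math.ceil(num_hdds / hdds_per_journal)

# 12-HDD node journaled on SATA/SAS SSDs at a 4:1 ratio:
print(journal_devices(12, 4))   # 3 SSDs
# 35-HDD node journaled on NVMe at an 18:1 ratio:
print(journal_devices(35, 18))  # 2 NVMe devices
```

These counts line up with the RCT-200 (12 HDDs, 3 SSDs) and RCT-400 (35 HDDs, 2 PCIe SSDs) configurations in the portfolio section.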
• 2x NVMe ~40Gb
• 4x NVMe ~100Gb
• 2x SATA SSD ~10Gb
• 1x SAS SSD ~10Gb
• (20~25)x HDD ~10Gb
• ~100x HDD ~40Gb
NIC: 10G/40G -> 25G/100G
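The device-to-NIC pairings above amount to summing nominal device bandwidth. The per-device Gbit/s figures below are back-derived assumptions from the slide's pairings, not measured values:

```python
# Illustrative per-device throughput assumptions (Gbit/s),
# back-derived from the slide's pairings; not measured figures.
DEVICE_GBPS = {"hdd": 0.4, "sata_ssd": 5.0, "sas_ssd": 10.0, "nvme": 20.0}

def nic_gbps_needed(devices):
    """Sum nominal device bandwidth to pick a NIC class, e.g.
    ~20-25 HDDs saturate a 10Gb link, ~100 HDDs a 40Gb link."""
    return sum(DEVICE_GBPS[kind] * count for kind, count in devices.items())

print(nic_gbps_needed({"hdd": 25}))  # ~10 Gbit/s -> 10Gb NIC
print(nic_gbps_needed({"nvme": 2}))  # ~40 Gbit/s -> 40Gb NIC
```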
• CPU Offload through RDMA/iWARP
• Erasure Coding Offload
• Allocate computing on different silicon areas
NIC: I/O Offloading
• Object Replication
– 1 Primary + 2 Replicas (or more)
– CRUSH Allocation Ruleset
• Erasure Coding
– [k+m], e.g. 4+2, 8+3
– Better Data Efficiency: k/(k+m) vs. 1/(1+replicas)
Erasure Coding vs. Replication
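The efficiency comparison works out as follows; a minimal sketch of the two formulas on this slide:

```python
def ec_efficiency(k, m):
    """Usable fraction of raw capacity for a k+m erasure code."""
    return k / (k + m)

def replica_efficiency(extra_replicas):
    """Usable fraction for 1 primary + N extra replicas
    (3x replication keeps 2 extra copies)."""
    return 1 / (1 + extra_replicas)

print(f"EC 4+2: {ec_efficiency(4, 2):.0%}")        # 67% usable
print(f"3x replica: {replica_efficiency(2):.0%}")  # 33% usable
```

So a 4+2 pool stores twice as much usable data as a 3x-replicated pool on the same raw capacity, at the cost of extra CPU for encoding and recovery.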
Size/Workload: Small | Medium | Large
• Throughput-optimized: transfer bandwidth, sequential R/W
• Capacity-optimized: cost per capacity, scalability (Hadoop?)
• IOPS-optimized: IOPS per 4K block, random R/W; latency-sensitive random R/W (hyper-converged?, desktop virtualization)
Workload and Configuration
Red Hat Ceph
• Intel ISA-L
• Intel SPDK
• Intel CAS
• Mellanox Accelio Library
Vendor-specific Value-added Software
Ceph Reference Architecture and
QCT Solution
• Trade-off among Technologies
• Scalable in Architecture
• Optimized for Workload
• Affordable as Expected
Design Principle
1. Needs for scale-out storage
2. Target workload
3. Access method
4. Storage capacity
5. Data protection methods
6. Fault domain risk tolerance
Design Considerations
[Chart: workloads plotted by IOPS vs. MB/sec — OLTP/transactional DB (high IOPS), OLAP/data warehouse, big data, HPC/scientific, block transfer, audio/video streaming (high MB/sec)]
Storage Workload
Throughput-optimized:
• Small (500TB*): QxStor RCT-200, 16x D51PH-1ULH (16U); 12x 8TB HDDs, 3x SSDs, 1x dual-port 10GbE, 3x replica
• Medium (>1PB*): QxStor RCT-400, 6x T21P-4U/Dual (24U); 2x 35x 8TB HDDs, 2x 2x PCIe SSDs, 2x single-port 40GbE, 3x replica
• Large (>2PB*): QxStor RCT-400, 11x T21P-4U/Dual (44U); 2x 35x 8TB HDDs, 2x 2x PCIe SSDs, 2x single-port 40GbE, 3x replica
Cost/Capacity-optimized:
• QxStor RCC-400, Nx T21P-4U/Dual; 2x 35x 8TB HDDs, 0x SSDs, 2x dual-port 10GbE, erasure coding 4:2
IOPS-optimized: future direction (Small/Medium); NA (Large)
* Usable storage capacity
QCT QxStor Red Hat Ceph Storage Edition Portfolio: Workload-driven Integrated Software/Hardware Solution
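As a sanity check on the capacity tiers above, usable capacity follows from node count, drives per node, and the protection scheme's efficiency. A sketch; the helper name is ours:

```python
def usable_capacity_tb(nodes, drives_per_node, drive_tb, efficiency):
    """Usable capacity = raw capacity x protection efficiency
    (3x replica -> 1/3; erasure coding k+m -> k/(k+m))."""
    return nodes * drives_per_node * drive_tb * efficiency

# RCT-200 small config: 16 nodes x 12x 8TB HDDs with 3x replication
print(usable_capacity_tb(16, 12, 8, 1 / 3))  # ~512 TB, i.e. the ~500TB tier
```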
Throughput-Optimized:
• RCT-200: densest 1U Ceph building block; best reliability with smaller failure domain
• RCT-400: scales to 2x 280TB per chassis; obtains best throughput and density at once
• Use case: block or object storage; 3x replication; video, audio, image repositories, and streaming media
Cost/Capacity-Optimized:
• RCC-400: highest density, 560TB raw capacity per chassis, with greatest price/performance
• Use case: typically object storage; erasure coding common for maximizing usable capacity; object archive
QCT QxStor Red Hat Ceph Storage Edition: co-engineered with the Red Hat Storage team to provide an optimized Ceph solution
Ceph Solution Deployment Using QCT QPT Bare Metal Provision Tool
QCT Solution Value Proposition
• Workload-driven
• Hardware/software pre-validated, pre-optimized and pre-integrated
• Up and running in minutes
• Balance between production (stable) and innovation (up-streaming)
Test Result
[Diagram: 10 client nodes (S2B) and 5 Ceph nodes (S2PH) on a 10Gb public network, with a separate 10Gb cluster network between the Ceph nodes]
General Configuration
• 5 Ceph nodes (S2PH), each with 2x 10Gb links
• 10 client nodes (S2B), each with 2x 10Gb links
• Public network: balanced bandwidth between client nodes and Ceph nodes
• Cluster network: offloads traffic from the public network to improve performance
Option 1 (w/o SSD): 12 OSDs per Ceph storage node; S2PH (E5-2660) x2; RAM: 128 GB
Option 2 (w/ SSD): 12 OSDs / 3 SSDs per Ceph storage node; S2PH (E5-2660) x2; RAM: 12 (OSD) x 2GB = 24 GB
Testing Configuration (Throughput-Optimized)
[Diagram: 8 client nodes (S2S) on 10Gb links and 2 Ceph nodes (S2P) on 40Gb links to the public network]
General Configuration
• 2 Ceph nodes (S2P), each with 2x 10Gb links
• 8 client nodes (S2S), each with 2x 10Gb links
• Public network: balanced bandwidth between client nodes and Ceph nodes
• Cluster network: offloads traffic from the public network to improve performance
Option 1 (w/o SSD): 35 OSDs per Ceph storage node; S2P (E5-2660) x2; RAM: 128 GB
Option 2 (w/ SSD): 35 OSDs / 2 PCIe SSDs per Ceph storage node; S2P (E5-2660) x2; RAM: 128 GB
Testing Configuration (Capacity-Optimized)
Level | Component | Test Suite
Raw I/O | Disk | FIO
Network I/O | Network | iperf
Object API I/O | librados | radosbench
Object I/O | RGW | COSBench
Block I/O | RBD | librbdfio
CBT (Ceph Benchmarking Tool)
Linear Scale Out
Linear Scale Up
Price, in terms of Performance
Price, in terms of Capacity
Protection Scheme
Cluster Network
QCT/Red Hat Ceph
Whitepaper
http://www.qct.io/account/download/download?order_download_id=1022&dtype=Reference%20Architecture
QCT/Red Hat Ceph Solution Brief
https://www.redhat.com/en/files/resources/st-performance-sizing-guide-ceph-qct-inc0347490.pdf
http://www.qct.io/Solution/Software-Defined-Infrastructure/Storage-Virtualization/QCT-and-Red-Hat-Ceph-Storage-p365c225c226c230
QCT/Red Hat Ceph Reference Architecture
• The Red Hat Ceph Storage Test Drive lab in QCT Solution Center provides you a free hands-on experience. You'll be able to explore the features and simplicity of the product in real-time.
• Concepts: Ceph feature and functional test
• Lab Exercises: Ceph Basics; Ceph Management (Calamari/CLI); Ceph Object/Block Access
QCT Offer TryCeph (Test Drive) Later
Remote access to QCT cloud solution centers
• Easy to test, anytime and anywhere
• No facilities and logistics needed
• Configurations: RCT-200 and newest QCT solutions
QCT Offer TryCeph (Test Drive) Later
• Ceph is an Open Architecture
• QCT, Red Hat and Intel collaborate to provide a workload-driven, pre-integrated, comprehensively tested and well-optimized solution
• Red Hat: Open Software/Support Pioneer; Intel: Open Silicon/Technology Innovator; QCT: Open System/Solution Provider
• Together We Provide the Best
CONCLUSION
www.QuantaQCT.com
Thank you!
www.QCT.io
QCT CONFIDENTIAL
Looking for an innovative cloud solution?
Come to QCT, who else?