TRANSCRIPT
1
Improving Software-defined Storage Outcomes Through Telemetry Insights
SUP-1312
Lars Marowsky-Brée
Distinguished Engineer
2
Agenda
1. Goals and Motivation
2. Data Collection Methodology
3. Scope and Limitations
4. Exploratory Analysis
5. Pretty Pictures
6. Q&A
3
Goals And Motivation (Developer Side)
• Improve product/project decisions
• Understand actual deployments
• Detect anomalies and trends proactively
4
Automated Telemetry Augments Support
• Support cases are only opened once an issue has escalated to human attention
• Data from support incidents is biased towards unhealthy environments
• We want to identify issues before they escalate to support incidents, and better understand the impact of reported incidents
5
Goals And Motivation (User/Customer PoV)
• Improve product/project decisions to reflect your usage
• Make sure developers understand your deployments
• Detect anomalies and trends proactively before they affect your systems
6
Automated Telemetry Vs Surveys
• Surveys are limited in scope and depth
• Surveys provide qualitative data and human insights
• Telemetry is automated and delivers more frequent updates
• Telemetry has fewer typos :-)
• Automated telemetry + surveys: <3
7
Sneak Peek: Community Survey ’19
• 404 responses
• Total capacity reported: ~1184 PB
• Unclear, since obviously not all responses used consistent units
• 33% said they have enabled Telemetry already <3
• … does this match the reports?
• Full(er) analysis upcoming
8
Why Users Have Not Enabled Telemetry
84 weren’t aware the feature existed
74 wish to understand data privacy better
54 run Ceph versions that do not support it yet
33 are in firewalled or air-gapped environments
9
Telemetry Methodology
• Ceph clusters report aggregate statistics
• Data is anonymized, no IP addresses/hostnames/... stored!
• “Upstream first” via the Ceph Foundation
• Community Data License Agreement – Sharing, Version 1.0
• Shared data corpus improves outcomes
• Opt-in, not (yet) enabled by default
• # ceph telemetry on
10
Ceph Community Support For Telemetry
• Upstream support began in Ceph Mimic
• Significant enhancements in Nautilus
• SUSE backported to Luminous
• Supported in:
• SUSE Enterprise Storage 5.5 Maintenance Updates (upcoming)
• SUSE Enterprise Storage 6
11
Examples Of Data Included With Telemetry
• Aggregate totals for capacity and usage
• Number of OSDs, MONs, hosts
• Version aggregates (Ceph, kernel, distribution)
• CephFS metrics, number of RBDs, pool data
• Crashes (can be disabled separately)
# ceph telemetry show
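For a quick look at what would actually be shared, the JSON that "ceph telemetry show" prints can be inspected directly. A minimal sketch, assuming the ceph CLI is on the PATH and showing only the top-level keys:

import json
import subprocess

# 'ceph telemetry show' prints the pending report as JSON without
# sending it anywhere; handy for reviewing exactly what would be shared.
out = subprocess.check_output(['ceph', 'telemetry', 'show'])
report = json.loads(out)
print(sorted(report.keys()))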
12
Limitations – Caveat Emptor
Biased sample!
• “Recent” versions only
• Not enabled by default, users need to actively enable
• Environments need Internet access for the upload
• Enterprise environments likely under-represented
Thus: not representative of the whole population, treat with care!
Look at trends; don’t worry about exact numbers
13
Exploratory Data Analysis
• Python (ipython, pandas)
• Data preparation – clean up, flatten into a table (see the sketch below)
• Resample to common intervals (daily, extrapolated)
• Start evaluating the data
• Find errors in the data set, go back to data preparation
• Enjoyed SUSE’s HackWeek 2020 very much!
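A minimal sketch of that preparation step, assuming one JSON document per report; the 'reports/' directory and the field names report_id and report_timestamp are illustrative, not the actual telemetry schema:

import json
from pathlib import Path
import pandas as pd

# Load raw telemetry reports, one JSON document per file.
reports = [json.loads(p.read_text()) for p in Path('reports').glob('*.json')]

# Flatten the nested JSON into a table, one row per report.
df = pd.json_normalize(reports)
df['report_timestamp'] = pd.to_datetime(df['report_timestamp'])

# Resample each cluster's reports onto a common daily grid,
# forward-filling the gaps between reports.
daily = (df.set_index('report_timestamp')
           .groupby('report_id')
           .resample('D')
           .ffill())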
14
Time For Pretty Pictures
• Overall trends
• Example of finding a bug
• Version and feature adoption
• Identifying most common practices
• Sizing in the real world
15
How Many Clusters Are Reporting In?
16
Total Capacity Reporting (Petabytes)
17
Cross-checking This With The Survey Results:
In [183]: t_on = survey[
    survey['Is telemetry enabled in your cluster?'] == 'Yes']
In [184]: t_on['Total raw capacity'].agg('sum')/10**3
Out[184]: 280.126
In [185]: t_on['How many clusters ...'].agg('sum')
Out[185]: 308.0
18
Major Ceph Versions In The Field
19
Breakdown Of Ceph v14.x.y On OSDs In The Field
20
v14.x.y Again, But Normalized
21
When Do People Update?
• Important for staff planning etc.
• Compute the rate of change per version for every day (sketched in pandas below)
• Excursion: total flow through versions
• Aggregate the absolute values per day for the total rate of change
• Aggregate by day of week
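A rough pandas sketch of that computation; the counts table layout is hypothetical (daily index, one column per Ceph version), not the actual data set:

import pandas as pd

def update_flow_by_weekday(counts: pd.DataFrame) -> pd.Series:
    # counts: indexed by day, one column per version, values = number
    # of reporting daemons on that version (hypothetical layout).
    change = counts.diff()              # per-version change per day
    flow = change.abs().sum(axis=1)     # total rate of change per day
    return flow.groupby(flow.index.day_name()).mean()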
… also a good example of the caveats to be mindful of:
22
Version Changes Aggregated By Day Of Week
23
Placement Groups: How Many Per Pool?
• Quite important for the even balancing of data
• Rule of thumb is to have ~100 PGs per OSD (see the sketch below)
• Should be rounded to a power of two
• Exact formula is a bit more difficult as it varies with the data distribution between pools, pool “size”, ...
• What do users do?
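A back-of-the-envelope sketch of that rule of thumb, not the project’s exact formula; the defaults for pgs_per_osd and replicas are illustrative:

import math

def suggest_pg_num(num_osds: int, pgs_per_osd: int = 100, replicas: int = 3) -> int:
    # Target ~100 PGs per OSD, shared across the replicas,
    # then round to the nearest power of two.
    target = num_osds * pgs_per_osd / replicas
    return 2 ** max(0, round(math.log2(target)))

# Example: 40 OSDs with 3x replication -> 1024 PGs in total.
print(suggest_pg_num(40))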
24
Top 20 pg_num Values Across All Pools …?!
25
pg_num – Power Of Two Or Not
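The power-of-two classification behind this chart is a one-liner; a hypothetical helper:

def is_power_of_two(n: int) -> bool:
    # A positive integer is a power of two iff exactly one bit is set.
    return n > 0 and (n & (n - 1)) == 0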
26
How Did The Ceph Project Remedy This?
• Improve documentation, remove bad example, clarify impact
• Improve the UI/UX
• Add HEALTH_WARN if state is detected
• Introduce pg_autoscaler to fully automate this
• Available in SUSE Enterprise Storage 6 MU
https://ceph.io/community/the-first-telemetry-results-are-in/
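For reference, assuming the Nautilus-era CLI, autoscaling is switched per pool (modes: off/warn/on):
# ceph osd pool set <pool> pg_autoscale_mode on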
27
Adoption Of pg_autoscaler Functionality
28
Power Of Two pg_num With pg_autoscaler On:
29
Prioritization
• What is the actual usage pattern?
• How significant would an issue in a specific feature/area be?
• Focus QA and assess support incident impact
• But also: understand why some users are holding out on a “legacy” feature
• Are we ready to deprecate something?
30
How Many OSDs Remain On FileStore?
31
Number Of Pools: Replicated Vs Erasure Coding
32
Number Of Clusters: Replicated Vs Erasure Coding
33
Which Erasure Code Plugins Are Used?
34
EC: Which k+m Values Are Chosen?
35
What Defaults Do Users Most Frequently Change?
36
Let’s Talk Real-World Sizing
• Everyone wants to know what other people do
• Reflects market sweet spots
• Currently only a snapshot, not enough data to identify hardware trends
37
Deployed Densities, Device Sizes (Quartiles)

Quartile           0.25   0.5   0.75   1.0
OSD/host              3     6     11    63
OSD/host < 1 PB       3     5      9    63
OSD/host > 1 PB      13    16     24    58
TB/OSD                1     4      7    14
TB/OSD < 1 PB         1     3      5    14
TB/OSD > 1 PB         6    10     11    12
TB/host               4    16     50   630
TB/host < 1 PB        3    12     40   186
TB/host > 1 PB       61   128    199   630
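A hypothetical sketch of how such a quartile table can be produced with pandas; the column names total_tb, num_osds, and num_hosts are illustrative, not the telemetry schema:

import pandas as pd

def density_quartiles(clusters: pd.DataFrame) -> pd.DataFrame:
    # clusters: one row per cluster with total capacity and daemon counts.
    density = pd.DataFrame({
        'OSD/host': clusters['num_osds'] / clusters['num_hosts'],
        'TB/OSD':   clusters['total_tb'] / clusters['num_osds'],
        'TB/host':  clusters['total_tb'] / clusters['num_hosts'],
    })
    return density.quantile([0.25, 0.5, 0.75, 1.0])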
38
OSDs: Rotational Vs Flash/SSD/NVMe
39
OSDs: Rotational Vs Flash/SSD/NVMe, >= 1 PB
40
Future Enhancements
Support different telemetry transport methods (with registration?)
Include more relevant metrics, as identified by questions we cannot yet answer
• Performance metrics, OSD variance, per-pool capacity/usage, client versions/numbers …
• Device and fault data for predictive failure analysis
• Data mining crash data
Automated dashboards on Ceph site: https://telemetry-public.ceph.com/
Consider if/how to enable this by default once acceptance has grown
41
Questions? Answers!
# ceph telemetry on
Help us serve you better.
42
Questions?
43
General Disclaimer
This document is not to be construed as a promise by any participating company to
develop, deliver, or market a product. It is not a commitment to deliver any material,
code, or functionality, and should not be relied upon in making purchasing
decisions. SUSE makes no representations or warranties with respect to the contents of
this document, and specifically disclaims any express or implied warranties of
merchantability or fitness for any particular purpose. The development, release, and
timing of features or functionality described for SUSE products remains at the sole
discretion of SUSE. Further, SUSE reserves the right to revise this document and to
make changes to its content, at any time, without obligation to notify any person or entity
of such revisions or changes. All SUSE marks referenced in this presentation are
trademarks or registered trademarks of SUSE, LLC, Inc. in the United States and other
countries. All third-party trademarks are the property of their respective owners.