Introduction to Scientific Data Management
[email protected] 2015
http://www.cism.ucl.ac.be/training
Goal of this session:
“Share tools, tips and tricks related to the storage, transfer, and sharing
of scientific data”
1. Data storage
Block – File – Object
Databases
Data storage
● Medium (Flash, Hard Drives, Tapes, DRAM)
● Attachment (IDE, SAS, SATA, iSCSI, ATAoE, FC)
● Block storage: LVM, RAID (software RAID, JBOD, erasure coding)
● Filesystems: local filesystem, global filesystem
● Data stores: RDBMS, NoSQL, object store
● Schema, serialization (file formats, etc.)
Storage Medium Technologies
http://www.slideshare.net/IMEXresearch/ss-ds-ready-for-enterprise-cloud
Storage performance
http://www.slideshare.net/IMEXresearch/ss-ds-ready-for-enterprise-cloud
Storage safety (RAID)
https://en.wikipedia.org/wiki/Standard_RAID_levels
Storage abstraction levels
(local) Filesystems
http://arstechnica.com/information-technology/2014/01/bitrot-and-atomic-cows-inside-next-gen-filesystems/
Generation 0: No system at all. There was just an arbitrary stream of data. Think punchcards, data on audiocassette, Atari 2600 ROM carts.
Generation 1: Early random access. Here, there are multiple named files on one device with no folders or other metadata. Think Apple ][ DOS (but not ProDOS!) as one example.
Generation 2: Early organization (aka folders). When devices became capable of holding hundreds of files, better organization became necessary. We're referring to TRS-DOS, Apple //c ProDOS, MS-DOS FAT/FAT32, etc.
Generation 3: Metadata—ownership, permissions, etc. As the user count on machines grew higher, the ability to restrict and control access became necessary. This includes AT&T UNIX, Netware, early NTFS, etc.
Generation 4: Journaling! This is the killer feature defining all current, modern filesystems—ext4, modern NTFS, UFS2, you name it. Journaling keeps the filesystem from becoming inconsistent in the event of a crash, making it much less likely that you'll lose data, or even an entire disk, when the power goes off or the kernel crashes.
Generation 5: Copy on Write snapshots, Per-block checksumming, Volume management, Far-future scalability, Asynchronous incremental replication, Online compression. Generation 5 filesystems are Btrfs and ZFS.
Network filesystem
NAS (e.g. NFS) vs. SAN (e.g. GFS)
One source, many consumers
Pictures from https://www.redhat.com/magazine/008jun05/features/gfs_nfs/
Parallel filesystem
e.g. Lustre, GPFS, BeeGFS, GlusterFS
Many sources, many consumers
Pictures from https://www.redhat.com/magazine/008jun05/features/gfs_nfs/
Special filesystems – in memory
Filesystems
What filesystem for what usage
● Home (NFS) : Small size, Small I/Os
● Global scratch (parallel FS) : Large size, Large I/Os
● Local scratch (local FS): Medium size, Large I/Os
● In-memory (tmpfs): Small Size, Large I/Os
● Mass storage (NFS): Large size, Small I/Os
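The in-memory tier can be used directly from a job script. A minimal sketch, assuming /dev/shm is a RAM-backed tmpfs as on most Linux distributions (the mount point may differ on your cluster):

```shell
# Pick a RAM-backed scratch directory when available, fall back to /tmp.
# Assumption: /dev/shm is tmpfs (typical on Linux); check `df -T /dev/shm`.
if [ -d /dev/shm ] && [ -w /dev/shm ]; then
    scratch=/dev/shm
else
    scratch=/tmp
fi
workdir=$(mktemp -d "$scratch/job.XXXXXX")
echo "fast scratch in $workdir"
# ... run I/O-heavy intermediate steps here, then copy results out ...
rm -rf "$workdir"
```

Remember that tmpfs contents vanish on reboot and count against memory, so copy final results to mass storage before the job ends.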
Text File Formats – JSON, YAML, XML
Text File Formats – CSV, TSV
Binary File Formats – CDF, HDF
http://pro.arcgis.com/en/pro-app/help/data/multidimensional/fundamentals-of-netcdf-data-storage.htm
Binary File Formats – CDF, HDF
https://www.nersc.gov/users/training/online-tutorials/introduction-to-scientific-i-o/
What file format for what usage
● Metadata
– Configuration file: INI, YAML
– Result with context information: JSON
● Data
– Small data (kBs): CSV, TSV
– Medium data (MBs): compressed CSV
– Large data (GBs): netCDF, HDF5, XDMF
– Huge data (TBs): Database, Object store (“loss of innocence”)
Use dedicated libraries to write and read these formats.
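For the medium-data case, compressing CSV is a one-liner, and the data stays streamable without a temporary decompressed copy. A quick sketch:

```shell
# Medium-sized CSV: store it gzip-compressed, read it back as a stream
set -e
csv=$(mktemp)
printf 'x,y\n1,2\n3,4\n' > "$csv"
gzip -f "$csv"          # produces $csv.gz and removes the original
zcat "$csv.gz"          # stream the rows back, e.g. into an analysis tool
rm -f "$csv.gz"
```

Most analysis stacks (Python's pandas, R's read.csv, awk via zcat) consume gzipped CSV directly, so compression costs nothing in workflow terms.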
Object storage
● Object: data (e.g. file) + meta data
● Often built on erasure coding
● Scales out easily
● Useful for web applications
● Examples:
– OpenStack Swift
– Ceph
NoSQL
Pictures from http://www.tomsitpro.com/articles/rdbms-sql-cassandra-dba-developer,2-547-2.html
RDBMS
Pictures from http://www.ibm.com/developerworks/library/x-matters8/
SQL: SELECT AuthorName, Title FROM Books JOIN Authors ON Authors.AuthorID=Books.AuthorID
RDBMS
Pictures from http://www.ibm.com/developerworks/library/x-matters8/
● Mostly needed for categorical data and alphanumerical data (not suited for matrices, but good for end-results)
● Indexes make finding a data element very fast (and also computing sums, maxima, etc.)
● Encodes relations between data (constraints, etc.)
● Atomicity, Consistency, Isolation, and Durability (ACID)
When to use?
– when you have a large number of small files
– when you perform a lot of direct writes in a large file
– when you want to keep structure/relations between data
– when software crashes have a non-negligible probability
– when files are updated by several processes
● When not to use:
– only sequential access
– simple matrices/vectors, etc.
– direct access on fixed-size records and no structure
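The last point can be made concrete: with fixed-size records, direct access is just an offset computation, so a flat file suffices. A sketch with hypothetical 4-byte records:

```shell
# Direct access on fixed-size records: record i starts at i * record_size,
# so plain dd arithmetic replaces a database lookup.
set -e
f=$(mktemp)
printf 'REC0REC1REC2' > "$f"      # three 4-byte records
i=1                                # record index to fetch (0-based)
dd if="$f" bs=4 skip=$i count=1 2>/dev/null   # prints REC1
rm -f "$f"
```

The same seek arithmetic works from any language (fseek in C, file.seek in Python), which is why simple matrices and fixed-record files rarely justify a database.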
2. Data transfer
faster and less secure
parallel transfers
scp -c cipher ...
http://blog.famzah.net/2010/06/11/openssh-ciphers-performance-benchmark/
Fastest: No encryption at all
● Need friendly firewall (choose direction accordingly)
● Only over trusted networks
● If rsh is installed: rcp instead of scp
● If rsh is not installed: nc on both ends
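Since rcp and nc give no integrity guarantee beyond TCP's, it is prudent to compare checksums on both ends after an unencrypted transfer. A local sketch of the idea (cp stands in for the network hop):

```shell
# Verify an unencrypted transfer by comparing checksums on both ends
set -e
src=$(mktemp)
head -c 4096 /dev/urandom > "$src"
cp "$src" "$src.received"              # stand-in for the rcp / nc transfer
a=$(sha256sum "$src" | cut -d' ' -f1)
b=$(sha256sum "$src.received" | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "transfer verified"
rm -f "$src" "$src.received"
```

In practice you run sha256sum (or the faster md5sum, fine for corruption checks) on each machine and compare the two hashes.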
Resuming transfers
● When nothing changed but the transfer was interrupted
– rsync --size-only: do not perform byte-level file comparison; files whose size already matches are skipped
– rsync --append: do not re-check partially transferred files; resume the transfer where it was abandoned (assuming the first attempt was made with scp or with rsync --inplace)
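What --append does can be mimicked locally: copy only the bytes past the destination's current size. A sketch with no network involved:

```shell
# Resume a transfer by appending only the missing tail
# (the essence of what rsync --append does)
set -e
src=$(mktemp); dst=$(mktemp)
printf 'AAAABBBBCCCC' > "$src"       # full file: 12 bytes
printf 'AAAA'         > "$dst"       # interrupted transfer: first 4 bytes only
have=$(wc -c < "$dst")
tail -c +$((have + 1)) "$src" >> "$dst"   # bytes have+1 .. end
cmp -s "$src" "$dst" && echo "resumed: files identical"
rm -f "$src" "$dst"
```

This is also why --append is only safe when the existing prefix is known to be intact: nothing re-checks the bytes already on disk.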
Parallel data transfer: bbcp
● Better use of the bandwidth than SCP
● Needs to be installed on both sides (easy to install)
● Needs friendly firewalls
Parallel data transfers: parsync
http://moo.nac.uci.edu/~hjm/parsync/
Parallel data transfers: sbcast
Transferring ZOT files
● Zillions Of Tiny files
● More meta-data than data → large overhead for rsync
● Solution: Pre-tar or tar on the fly
● Needs friendly firewall
● Also avoid 'ls' and '*' as they sort their output; favor 'find'
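The tar-on-the-fly pattern looks like this; the local pipe below stands in for the network hop (over the network you would write `tar cf - dir | ssh host 'cd /dest && tar xf -'`, with `host` a placeholder for your destination machine):

```shell
# Pack and unpack zillions of tiny files in one stream,
# with no intermediate tarball on disk.
set -e
src=$(mktemp -d); dst=$(mktemp -d)
mkdir "$src/data"
for i in 1 2 3; do echo "tiny $i" > "$src/data/f$i"; done
( cd "$src" && tar cf - data ) | ( cd "$dst" && tar xf - )
find "$dst/data" -type f | wc -l     # use find, not ls: no sorting overhead
rm -rf "$src" "$dst"
```

One tar stream pays the per-file metadata cost once on each end, instead of one round-trip per file as rsync or scp would.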
3. Data sharing
with other users (Unix permissions, Encryption)
with external users (ownCloud)
Data sharing
Data sharing with other users
Sharing with all other users
Sharing with the group
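A group-shared directory typically combines the setgid bit on the directory with group-readable files. A sketch (the mode values shown are one common choice, not the only one):

```shell
# Share a directory with your Unix group: setgid directory + group-readable files
set -e
d=$(mktemp -d)
chmod 2750 "$d"              # rwxr-s---: group may enter and list; the setgid
                             # bit makes new files inherit the directory's group
touch "$d/result.csv"
chmod 640 "$d/result.csv"    # rw-r-----: group may read, others get nothing
stat -c '%a %n' "$d" "$d/result.csv"
rm -rf "$d"
```

On a real cluster you would also `chgrp` the directory to the project group; every parent directory on the path needs at least the execute bit for the group, or the share is unreachable.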
Sharing and hiding
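"Hiding" usually means removing the read bit on a directory while keeping the execute bit: others can open a file if they know its exact name, but cannot list the directory. A sketch:

```shell
# Traversable but not listable: execute without read on the directory
set -e
d=$(mktemp -d)
echo 'shared secret' > "$d/agreed-name.txt"
chmod 711 "$d"               # rwx--x--x: others cannot `ls` the directory,
                             # but can read $d/agreed-name.txt by exact name
stat -c '%a' "$d"            # prints 711
rm -rf "$d"
```

This is obscurity, not security: anyone who learns or guesses the filename can read it, so combine it with file-level permissions or encryption for anything sensitive.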
Sharing and encrypting
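Symmetric encryption is the simplest way to share a file with one trusted recipient. A sketch using openssl, assuming it is installed (gpg -c is an equivalent alternative); the passphrase below is a placeholder to be exchanged out of band:

```shell
# Encrypt a file before sharing it; decrypt on the recipient's side.
# Passphrase 'demo-passphrase' is a placeholder, never hard-code a real one.
set -e
f=$(mktemp)
printf 'sensitive result\n' > "$f"
openssl enc -aes-256-cbc -pbkdf2 -salt -pass pass:demo-passphrase \
        -in "$f" -out "$f.enc"
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:demo-passphrase \
        -in "$f.enc"                 # prints: sensitive result
rm -f "$f" "$f.enc"
```

The -pbkdf2 flag makes the passphrase-to-key derivation resistant to brute force; without it openssl falls back to a weak legacy scheme.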
Data sharing
Data sharing with external users
Data sharing with external users
● ownCloud (CISM login)
Dropbox-like
External SFTP connectors
Dropbox-like
My home on Manneback
Can create a share URL
And distribute it
Summary:
Storage: choose the right filesystem and the right file format
Transfer: use the parallel tools when possible and limit encryption in favor of throughput
Sharing: use the full potential of UNIX permissions and try ownCloud