
Best Practices Guide for using IBM Spectrum Protect with Cohesity


Abstract

This white paper outlines the best practices for using Cohesity as target storage for IBM Spectrum Protect.

December 2017

Table of Contents

About This Guide
Intended Audience
Terminology
Cohesity View Box / Namespaces
Cohesity View
Abbreviations
Solution Components
Cohesity Overview
IBM Spectrum Protect Overview
Logical Data Flow
Best Practices
Provisioning Storage for IBM Spectrum Protect
Creating the View Box and View/Share
Cohesity View Box
Cohesity View
Protocols
IBM Spectrum Protect Storage Pools
Storage Pool Types
Storage Pool Performance Comparison when using Cohesity
Mounting the NFS File Systems and Configuring IBM Spectrum Protect Storage Pools
Sequential-access Storage Pool (Device Class FILE)
IBM Spectrum Protect Configuration Option
Mounting Options
Creating Mount Points
Creating the Device Class and Storage Pool
Directory-container Storage Pool
Mounting Options
Creating Mount Points
Creating the Directory Container Storage Pool
Backups and Restores
About the Author
Version History
References

©2017 Cohesity, All Rights Reserved

About This Guide

Hyperconvergence is becoming the norm in data centers today. Companies adopting this next-generation infrastructure have realized significant TCO/ROI savings. These savings are the result of vastly simplified architectures, lower power and cooling needs, workload consolidation, a smaller hardware footprint, and a "pay as you grow" consumption model.

SpanFS is a completely new file system designed specifically for secondary storage consolidation. At the topmost layer, SpanFS exposes industry-standard, globally distributed NFS, SMB, and S3 interfaces. Cohesity is unique in its ability to support unlimited, frequent snapshots with no performance degradation. SpanFS has QoS controls built into all layers of the stack to support various workloads, and it can replicate, archive, and tier data to another Cohesity cluster or to the cloud. Tying all these benefits together is the simplicity of managing these web-scale platforms from a single UI. The design principles of distributed control and data planes eliminate complexity in infrastructure and management, making hyperconverged architectures attractive and bringing overall value to end customers.

Cohesity, along with IBM Spectrum Protect, can provide a robust, scalable, and simple-to-administer solution while also allowing for seamless growth. Cohesity provides a globally deduplicated, scale-out storage target that is natively integrated with the public cloud, and it interoperates with IBM Spectrum Protect to provide a very robust, scale-out data protection solution. This document describes how to configure and use Cohesity as a target for IBM Spectrum Protect.

Intended Audience

This paper is written for System and IBM Spectrum Protect Administrators who plan to configure and use Cohesity as target storage for IBM Spectrum Protect.

Cohesity uses floating/virtual IPs (VIPs) to provide the highest availability and load balancing. Each Cohesity cluster should have an equal number of VIPs per physical node. Always mount views (shares) using the VIPs. In the event of a node failure, the VIP on that node automatically moves to another Cohesity node, so it remains available to serve requests. Once the node failure is resolved, the VIP moves back. This happens automatically.
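To make the VIP guidance concrete, the sketch below generates one mount command per VIP so that I/O is spread across every Cohesity node. The VIP hostnames, view name, and mount-point layout are placeholders for illustration, not values from this guide; substitute your own before running anything.

```shell
# Hypothetical VIP hostnames for a 4-node Cohesity cluster -- substitute your own.
VIPS="vip1.example.com vip2.example.com vip3.example.com vip4.example.com"

# Emit one mount command per VIP (printed for review, not executed here).
# Each mount point gets a different VIP, spreading reads/writes across nodes.
i=1
for vip in $VIPS; do
  echo "sudo mount -t nfs ${vip}:/tsm-view /tsminst1/Cohesity/FilePool1_${i}"
  i=$((i + 1))
done > /tmp/vip_mounts.txt

cat /tmp/vip_mounts.txt
```

Review the generated commands, then run them (or, better, place the equivalent entries in fstab as shown later in this guide) so that a node failure only affects the VIP that floats away, not the mount itself.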

Cohesity recommends having familiarity with the following:

• Cohesity DataPlatform

• IBM Spectrum Protect Server Administration

Terminology

Cohesity View Box / Namespaces

A Cohesity View Box is a separate shared namespace that has common data reduction, availability, or archive policies. For the purposes of this document, a View Box will contain the Views (NFS, SMB, etc.). If de-dup is enabled for the View Box, all data will be de-duped both within each View and across all other Views within the View Box.

Cohesity View

A View is simply a file share, or a logical grouping of files within a View Box.


Abbreviations

Abbreviation  Description
NFS           Network File System
SMB           Server Message Block
S3            Simple Storage Service
VIP           Virtual IP
QoS           Quality of Service


Solution Components

The following components were used for interoperability testing:

Cohesity Overview

Cohesity introduced the world's first scale-out data management platform to enable organizations to standardize secondary workflows on a unified and fully distributed solution. Cohesity's scale-out distributed file system, SpanFS™, was built from the ground up to ensure complete scalability, enabling organizations to flexibly grow their environment by adding nodes to a cluster. With this scalability, organizations can eliminate the costs of data migrations and forklift upgrades, while benefiting from the simplicity of a homogeneous solution. SpanFS also provides global, variable-length deduplication and unlimited snapshots and clones, making it the ideal storage target for enterprise environments.

Cohesity cluster nodes have a shared-nothing topology and there is no single point of failure or inherent bottlenecks. Consequently both performance and capacity can scale linearly as more physical nodes are added to the cluster. The distributed file system spans across all nodes in the cluster and natively provides global deduplication, compression and encryption.

Cohesity is well suited as target storage for IBM Spectrum Protect because:

• Cohesity provides a single and unified interface for provisioning, managing, and monitoring (low management overhead) target storage for IBM Spectrum Protect

• Variable-length, post-process or in-line, global deduplication. Cohesity even dedupes between multiple separate IBM Spectrum Protect servers/instances

• Multiple protocols to choose from

• Non-disruptive Cohesity hardware refresh and expansion without downtime

• Unlimited snapshots and clones on the Cohesity platform


Component                    Version  OS Version
IBM Spectrum Protect Server  8.1.3    SUSE Linux Enterprise Server 11 SP4
IBM Spectrum Protect Client  8.1.2    CentOS Linux 7.2
Cohesity DataProtect         4.1.2


IBM Spectrum Protect Overview [1]

IBM Spectrum Protect™ provides centralized, automated data protection that helps to reduce data loss and manage compliance with data retention and availability requirements.

• Data protection components: The data protection solutions that IBM Spectrum Protect provides consist of a server, client systems and applications, and storage media. IBM Spectrum Protect provides management interfaces for monitoring and reporting the data protection status.

• Data protection services: IBM Spectrum Protect provides data protection services to store and recover data from various types of clients. The data protection services are implemented through policies that are defined on the server. You can use client scheduling to automate the data protection services.

• Processes for managing data protection with IBM Spectrum Protect: The IBM Spectrum Protect server inventory has a key role in the processes for data protection. You define policies that the server uses to manage data storage.

• User interfaces for the IBM Spectrum Protect environment: For monitoring and configuration tasks, IBM Spectrum Protect provides various interfaces, including the Operations Center, a command-line interface, and an SQL administrative interface [1].

IBM Tivoli Storage Manager (TSM), starting with version 7.1.3, is marketed as IBM Spectrum Protect.


[Figure: Data protection sources (virtual environments with hypervisor + virtual SAN, legacy physical servers, databases via RMAN, and cloud) protected by Cohesity DataProtect on the Cohesity DataPlatform.]


Best Practices

Provisioning Storage for IBM Spectrum Protect

In order for IBM Spectrum Protect to leverage Cohesity as target storage, storage must be provisioned and presented for IBM Spectrum Protect to use. Once the storage is presented to the IBM Spectrum Protect server OS, storage pools can be created and used for storing backups from clients.

To obtain the greatest throughput, all that is needed is to spread the reads and writes across all nodes in the Cohesity cluster. As a Cohesity cluster is grown, which is done simply by adding as many new nodes as needed (as few as one, or several), available raw and usable storage capacity increases along with total available throughput. IBM Spectrum Protect is able to leverage the power of the Cohesity platform by spreading its reads and writes among all the Cohesity nodes. Although IBM Spectrum Protect does not have a load-balancing algorithm as such, the volumes or files it stores on Cohesity will be spread out across multiple mount points, one per node via a VIP. So in the case of an IBM Spectrum Protect server with 4,000 volumes and a Cohesity cluster with 4 nodes, roughly 1,000 volumes will be read from or written to per Cohesity node. This prevents the bottlenecks that can appear with more traditional single- or dual-controller storage.

Creating the View Box and View/Share

Cohesity View Box

Create a suitable View Box. If the IBM Spectrum Protect server/instance is not doing de-dup and/or compression, enable de-dup and compression on the Cohesity View Box for the greatest space savings. Directory-container storage pools enable de-dup and compression by default; even so, Cohesity can further de-dup and compress data already de-duped and compressed by a directory-container storage pool to gain maximum space efficiency. This is especially true where multiple IBM Spectrum Protect servers/instances store to the same View Box.

If de-dup between multiple IBM Spectrum Protect servers/instances is desired, all the Views should be created within a single View Box with de-dup enabled. De-dup and compression do add some overhead and thus will reduce read and write throughput to Cohesity. If the highest throughput is desired, at the expense of space usage, both de-dup and compression can be disabled. However, if space efficiency is of higher importance, it is recommended to enable both compression and de-dup.
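The volume-spreading arithmetic from the provisioning discussion (4,000 volumes across a 4-node cluster) is just integer division, sketched here with the example's numbers so it can be adapted to other cluster sizes:

```shell
# Rough volumes-per-node estimate, using the figures from the example above.
VOLUMES=4000   # IBM Spectrum Protect volumes in the storage pool
NODES=4        # Cohesity nodes (one VIP / mount point per node)

PER_NODE=$((VOLUMES / NODES))
echo "Approximately ${PER_NODE} volumes per Cohesity node"
```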


Logical Data Flow

[Diagram: IBM Spectrum Protect Clients → IBM Spectrum Protect Servers (Linux / AIX / Windows) → Cohesity Cluster over NFS / SMB / S3; data flows in both directions for backup and restore.]

The diagram shows the logical data flow and relationship between IBM Spectrum Protect Clients, Servers, and the Cohesity Cluster.


Cohesity View

Create a suitable View. If IBM Spectrum Protect is running on Linux or AIX, choose NFS; if IBM Spectrum Protect is running on Windows, choose SMB. Set appropriate white-lists: for security reasons, only the IBM Spectrum Protect servers that will be reading and writing to a View should be added to the View or global white-list. One or more Views can be created for IBM Spectrum Protect servers/instances. Although not required, it may make sense to create one View per IBM Spectrum Protect storage pool, or at the very least one per IBM Spectrum Protect server instance. Set the QoS as appropriate, for example Backup Target High. There are several QoS options; please refer to the Cohesity DataProtect Documentation for details on creating View Boxes and Views, setting white-lists, and understanding the different QoS settings.

Protocols

Available protocols include NFSv3, SMB [7], and S3 [7].

IBM Spectrum Protect Storage Pools

Storage pools are the logical groups used for storing backups, archives, or space-managed files within IBM Spectrum Protect. There are several types of storage pools; for the purposes of this document, we'll focus on primary storage pools backed by the NFS/SMB/S3 storage protocols.

IBM Spectrum Protect has several types of primary storage pools. The table below describes the storage pool types which are suitable for use with Cohesity via NFS/SMB/S3, as described and documented by IBM.

Storage Pool Types [2]


Storage pool type: Directory-container storage pool
Description: A primary storage pool that a server uses to store data. Data that is stored in directory-container storage pools uses both inline data deduplication and client-side data deduplication.
Uses: Use when you want to deduplicate data inline. By using directory-container storage pools, you remove the need for volume reclamation, which improves server performance and reduces the cost of storage hardware. You cannot use this type of storage pool for storage pool backup, migration, reclamation, import, or export operations.

Storage pool type: Cloud-container storage pool
Description: A primary storage pool that a server uses to store data. Use cloud-container storage pools to store data to an object-store based cloud storage provider. Data that is stored in cloud-container storage pools uses both inline data deduplication and client-side data deduplication.
Uses: By storing data in cloud-container storage pools, you can exploit the cost-per-unit advantages that clouds offer along with the scaling capabilities that cloud storage provides. You cannot use this type of storage pool for storage pool backup, migration, reclamation, encryption, import, or export operations.

Storage pool type: Sequential-access storage pool
Description: A set of volumes that the server uses to store backup versions of files, files that are archive copies, and files that are migrated from client nodes. Files are stored on tape or FILE devices. Data that is stored in sequential-access storage pools uses both post-process and client-side data deduplication.
Uses: Use this type of storage pool to keep a copy of your data on TAPE devices. You can migrate data into this type of storage pool.

Container storage pools were first introduced in IBM Spectrum Protect 7.1.3 and provide in-line server-side deduplication and significant improvements in performance and scalability. The container storage pool was further enhanced in 7.1.5 to provide in-line storage pool compression, which further enhances data reduction capabilities [3]. Container storage pools have several advantages over the traditional storage pools. It is recommended to use directory-container storage pool(s) when using Cohesity as a target because of the de-dup between multiple IBM Spectrum Protect instances. Sequential-access storage pools do appear to have a performance advantage when it comes to backup throughput, provided the IBM Spectrum Protect server and NFS mount points are configured correctly, as described in the sequential-access storage pool section below.

Storage Pool Performance Comparison when using Cohesity

Mounting the NFS file systems and Configuring IBM Spectrum Protect Storage Pools Once the desired storage pool is determined, follow the appropriate section below to mount the NFS file system(s) and create the storage pool(s). Sequential-access storage pool (Device Class FILE) The section below describes and walks through an example of creating a new device class and sequential-access storage pool. Storage Pools defined as a Sequential-access storage pools (device class type FILE) that write to volumes over NFSv3 can do so without the filesystem being mounted with the sync option per IBM4. According to IBM’s support document, this can be done because of how IBM Spectrum Protect issues a standard sync() call to the OS before the metadata is committed to the IBM Spectrum Protect database. Additionally, DIRECTIO needs to be set to NO within the IBM Spectrum Protect server configuration file. If this is not done, write performance to Cohesity will be slow and as a result backups will be slow as well.


                              Directory-container  Sequential-access [6]
Backup Speed/Throughput       GOOD                 VERY GOOD
Restore Speed/Throughput      GOOD                 VERY GOOD
Total De-Dup/Compression [5]  VERY GOOD            VERY GOOD


The chart below shows the relative performance difference between mounting the NFSv3 shares with sync and DIRECTIO=YES versus without sync and with DIRECTIO=NO. With sync and DIRECTIO=YES, IBM Spectrum Protect writes directly to Cohesity without any buffering, in 256 KB block sizes, which ends up being very inefficient and causes significant latency and thus lower throughput. Cohesity recommends that the share be mounted without the sync option and with DIRECTIO set to NO when using a device class of FILE.

IBM Spectrum Protect Configuration Option

Add DIRECTIO NO to the dsmserv.opt file and restart the IBM Spectrum Protect instance. The line can simply be added at the very bottom of dsmserv.opt; the instance must be restarted for the option to take effect.

8.

[Chart: Direct I/O vs. Buffered I/O. Relative throughput of backup and restore to a 4-node Cohesity cluster with inline dedup, comparing DirectIO=YES against DirectIO=NO.]


dsmserv.opt (Example Only)

$ cat dsmserv.opt
COMMmethod TCPIP
TCPport 1500
DEVCONFIG devconf.dat
VOLUMEHISTORY volhist.dat
...
DIRECTIO NO
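Since the option simply needs to appear once in dsmserv.opt, the edit can be scripted idempotently. The sketch below demonstrates the pattern on a scratch copy; the real file path varies per server instance, so point OPT_FILE at your actual dsmserv.opt before using it, and restart the instance afterward as described above.

```shell
# Sketch: append "DIRECTIO NO" to dsmserv.opt only if it is not already set.
# Demonstrated on a scratch copy so it is safe to run anywhere; the real
# dsmserv.opt lives in your server instance directory.
OPT_FILE=/tmp/dsmserv.opt.demo
printf 'COMMmethod TCPIP\nTCPport 1500\nDEVCONFIG devconf.dat\nVOLUMEHISTORY volhist.dat\n' > "$OPT_FILE"

# Running this line twice leaves exactly one DIRECTIO entry (idempotent).
grep -qiE '^DIRECTIO[[:space:]]+NO' "$OPT_FILE" || echo 'DIRECTIO NO' >> "$OPT_FILE"
grep -qiE '^DIRECTIO[[:space:]]+NO' "$OPT_FILE" || echo 'DIRECTIO NO' >> "$OPT_FILE"

cat "$OPT_FILE"
```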

Verify the DIRECTIO option is set by logging in to the IBM Spectrum Protect Administrator's CLI:

$ dsmadmc
IBM Spectrum Protect
Command Line Administrative Interface - Version X, Release X, Level X.X
(c) Copyright by IBM Corporation and other(s) 1990, 2017. All Rights Reserved.

Enter your user id: admin
Enter your password: [Password]

Session established with server IBMSPSRV: Linux/x86_64
Server Version X, Release X, Level X.XXX
Server date/time: MM/DD/YY HH:MM:SS  Last access: MM/DD/YY HH:MM:SS

Protect: IBMSPSRV>q option directio

Server Option      Option Setting
-----------------  --------------------
DIRECTIO           No

Mounting Options

OS     Mount Options
Linux  noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock
AIX    noatime,vers=3,proto=tcp,rsize=524288,wsize=524288,hard,intr,nolock

Creating Mount Points

Create a number of mount point directories equal to the number of Cohesity nodes/VIPs; in this example we have a 4-node Cohesity cluster with 4 VIPs. These steps/commands are run on the IBM Spectrum Protect server.

Create the mount points

$ sudo mkdir /tsminst1/Cohesity/FilePool1_1
$ sudo mkdir /tsminst1/Cohesity/FilePool1_2
$ sudo mkdir /tsminst1/Cohesity/FilePool1_3
$ sudo mkdir /tsminst1/Cohesity/FilePool1_4


Add the new NFS mounts to fstab.

fstab example

vip1.fqd:/IBMSP1-idd-filepool1 /tsminst1/Cohesity/FilePool1_1 nfs noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0
vip2.fqd:/IBMSP1-idd-filepool1 /tsminst1/Cohesity/FilePool1_2 nfs noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0
vip3.fqd:/IBMSP1-idd-filepool1 /tsminst1/Cohesity/FilePool1_3 nfs noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0
vip4.fqd:/IBMSP1-idd-filepool1 /tsminst1/Cohesity/FilePool1_4 nfs noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0

Mount the file systems

$ sudo mount -a

Creating the Device Class and Storage Pool

Below shows creating a new device class that points to the mounted NFS file systems from the Cohesity cluster, as well as creating the storage pool, then querying the device class to verify its configuration.

$ dsmadmc
IBM Spectrum Protect
Command Line Administrative Interface - Version X, Release X, Level X.X
(c) Copyright by IBM Corporation and other(s) 1990, 2017. All Rights Reserved.

Enter your user id: admin
Enter your password: [Password]

Session established with server IBMSPSRV: Linux/x86_64
Server Version X, Release X, Level X.XXX
Server date/time: MM/DD/YY HH:MM:SS  Last access: MM/DD/YY HH:MM:SS

Protect: IBMSPSRV>def devclass fileclass1 devtype=file mountlimit=XXX maxcapacity=XXg directory='/tsminst1/Cohesity/FilePool1_1,/tsminst1/Cohesity/FilePool1_2,/tsminst1/Cohesity/FilePool1_3,/tsminst1/Cohesity/FilePool1_4'
Protect: IBMSPSRV>def stgpool filepool1 fileclass1 maxscratch=XXXXXXX
Protect: IBMSPSRV>q devclass FILECLASS1
...
Device Access Strategy: Sequential
...
Device Type: FILE
...
Directory: /tsminst1/Cohesity/FilePool1_1,/tsminst1/Cohesity/FilePool1_2,/tsminst1/Cohesity/FilePool1_3,/tsminst1/Cohesity/FilePool1_4
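Because the four fstab entries differ only in VIP and mount point, they can be generated rather than typed by hand, which avoids copy-paste mistakes in the long option string. This sketch uses the view name, VIP names, and options from the example above; all of them should be replaced with your own values.

```shell
# Sketch: generate the fstab entries shown above from a VIP list.
# View name, VIPs, and mount-point prefix follow the document's example.
VIEW="IBMSP1-idd-filepool1"
OPTS="noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock"

i=1
for vip in vip1.fqd vip2.fqd vip3.fqd vip4.fqd; do
  echo "${vip}:/${VIEW} /tsminst1/Cohesity/FilePool1_${i} nfs ${OPTS} 0 0"
  i=$((i + 1))
done > /tmp/fstab_entries.txt

cat /tmp/fstab_entries.txt
```

Review /tmp/fstab_entries.txt, append it to /etc/fstab as root, then run `sudo mount -a` as shown above.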

Directory-container Storage Pool

The section below describes and walks through an example of creating a new directory-container storage pool.

Mounting Options

OS     Mount Options
Linux  sync,noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock
AIX    dio,noatime,vers=3,proto=tcp,rsize=524288,wsize=1048576,hard,intr,nolock

Creating Mount Points

Create a number of mount point directories equal to the number of Cohesity nodes/VIPs; in this example we have a 4-node Cohesity cluster with 4 VIPs. These steps/commands are run on the IBM Spectrum Protect server.

Create the mount points

$ sudo mkdir /tsminst1/Cohesity/Container1_1
$ sudo mkdir /tsminst1/Cohesity/Container1_2
$ sudo mkdir /tsminst1/Cohesity/Container1_3
$ sudo mkdir /tsminst1/Cohesity/Container1_4

fstab example

vip1.fqd:/IBMSP1-idd-containerpool1 /tsminst1/Cohesity/Container1_1 nfs sync,noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0
vip2.fqd:/IBMSP1-idd-containerpool1 /tsminst1/Cohesity/Container1_2 nfs sync,noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0
vip3.fqd:/IBMSP1-idd-containerpool1 /tsminst1/Cohesity/Container1_3 nfs sync,noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0
vip4.fqd:/IBMSP1-idd-containerpool1 /tsminst1/Cohesity/Container1_4 nfs sync,noatime,vers=3,proto=tcp,rsize=1048576,wsize=1048576,hard,intr,nolock 0 0

Mount the file systems

$ sudo mount -a
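Before defining the storage pool, it is worth confirming that every mount point is actually backed by an NFS mount rather than the local disk. The check below is a sketch: the `check_mounts` helper is hypothetical, it reads the Linux /proc/mounts format (so it does not apply to AIX), and the directory names follow this section's example. It is parameterized on the mounts file so it can be demonstrated against a sample.

```shell
# Sketch: verify the expected Cohesity mount points are live NFS mounts.
# On a real Linux server, call: check_mounts /proc/mounts
check_mounts() {
  mounts_file="$1"   # normally /proc/mounts; a file argument for illustration
  missing=0
  for i in 1 2 3 4; do
    # /proc/mounts lines look like: "src /mountpoint nfs options 0 0"
    if ! grep -q " /tsminst1/Cohesity/Container1_${i} nfs " "$mounts_file"; then
      echo "NOT MOUNTED: /tsminst1/Cohesity/Container1_${i}"
      missing=$((missing + 1))
    fi
  done
  return "$missing"   # exit status = number of missing mounts
}
# Example (on the server): check_mounts /proc/mounts || echo "fix mounts first"
```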


Creating the Directory Container Storage Pool


Create the Directory Container Storage Pool with Compression

$ dsmadmc
IBM Spectrum Protect
Command Line Administrative Interface - Version X, Release X, Level X.X
(c) Copyright by IBM Corporation and other(s) 1990, 2017. All Rights Reserved.

Enter your user id: admin
Enter your password: [Password]

Session established with server IBMSPSRV: Linux/x86_64
Server Version X, Release X, Level X.XXX
Server date/time: MM/DD/YY HH:MM:SS  Last access: MM/DD/YY HH:MM:SS

Protect: IBMSPSRV>def stgpool contpool1 stgtype=directory compression=yes
Protect: IBMSPSRV>def stgpooldirectory contpool1 '/tsminst1/Cohesity/Container1_1,/tsminst1/Cohesity/Container1_2,/tsminst1/Cohesity/Container1_3,/tsminst1/Cohesity/Container1_4'
Protect: IBMSPSRV>q stgpooldir stgpool=contpool1

Storage Pool Name  Directory                        Access
-----------------  -------------------------------  ------------
CONTPOOL1          /tsminst1/Cohesity/Container1_1  Read/Write
CONTPOOL1          /tsminst1/Cohesity/Container1_2  Read/Write
CONTPOOL1          /tsminst1/Cohesity/Container1_3  Read/Write
CONTPOOL1          /tsminst1/Cohesity/Container1_4  Read/Write
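Rather than typing the pool-definition commands interactively, they can be kept in a macro file so the configuration is reviewable and repeatable. The sketch below writes the commands from the example above to a file; replaying it with dsmadmc's macro facility (e.g. `macro /tmp/defpool.mac` inside an administrative session) is standard usage, but verify against your server level, and treat the file path as a placeholder.

```shell
# Sketch: capture the directory-container pool definition as a dsmadmc macro.
# Pool name and directories follow the document's example.
cat > /tmp/defpool.mac <<'EOF'
def stgpool contpool1 stgtype=directory compression=yes
def stgpooldirectory contpool1 /tsminst1/Cohesity/Container1_1,/tsminst1/Cohesity/Container1_2,/tsminst1/Cohesity/Container1_3,/tsminst1/Cohesity/Container1_4
q stgpooldir stgpool=contpool1
EOF

cat /tmp/defpool.mac
```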


Backups and Restores Once the storage pool is associated with IBM Spectrum Protect nodes/clients, backups can be performed.


Backup Example

$ sudo dsmc inc file
IBM Spectrum Protect
Command Line Backup-Archive Client Interface
Client Version X, Release X, Level X.X
Client date/time: MM/DD/YYYY HH:MM:SS
(c) Copyright by IBM Corporation and other(s) 1990, 2017. All Rights Reserved.

Node Name: XXXXXXX
Session established with server IBMSPSRV: Linux/x86_64
Server Version X, Release X, Level X.XXX
Server date/time: MM/DD/YYYY HH:MM:SS  Last access: MM/DD/YYYY HH:MM:SS

Incremental backup of volume 'file'
Normal File-->     2,147,483,648 file [Sent]
Successful incremental backup of 'file'

Total number of objects inspected:        1
Total number of objects backed up:        1
Total number of objects updated:          0
Total number of objects rebound:          0
Total number of objects deleted:          0
Total number of objects expired:          0
Total number of objects failed:           0
Total number of objects encrypted:        0
Total number of objects grew:             0
Total number of retries:                  0
Total number of bytes inspected:       2.00 GB
Total number of bytes transferred:     2.00 GB
...
Elapsed processing time:            HH:MM:SS


About the Author

Justin Willoughby is a 20-year IT veteran, currently working for Cohesity as a Solutions Engineer. In this role, Justin architects, builds, tests, and validates business-critical applications, databases, and virtualization solutions with Cohesity's DataProtect platform.

Version History

Version  Date           Document Version History
1.0      December 2017  Original Document


Restore Example

$ sudo rm file
$ sudo dsmc rest file
IBM Spectrum Protect
Command Line Backup-Archive Client Interface
Client Version X, Release X, Level X.X
Client date/time: MM/DD/YYYY HH:MM:SS
(c) Copyright by IBM Corporation and other(s) 1990, 2017. All Rights Reserved.

Node Name: XXXXXXX
Session established with server IBMSPSRV: Linux/x86_64
Server Version X, Release X, Level X.XXX
Server date/time: MM/DD/YYYY HH:MM:SS  Last access: MM/DD/YYYY HH:MM:SS

Restore function invoked.

Restoring     2,147,483,648 file [Done]

Restore processing finished.

Total number of objects restored:         1
Total number of objects failed:           0
Total number of bytes transferred:     2.00 GB
...
Elapsed processing time:            HH:MM:SS


References

[1] IBM Spectrum Protect concepts > IBM Spectrum Protect overview, IBM Knowledge Center
[2] Servers > Configuring storage > Storage pool types, IBM Knowledge Center
[3] Tivoli Storage Manager Deduplication FAQ, IBM developerWorks
[4] Considerations for using the NFS V3 protocol for an IBM Spectrum Protect storage pool, IBM Support

Other Notes

[5] IBM Spectrum Protect de-dup/compression combined with Cohesity de-dup/compression
[6] When Cohesity Views are mounted without the sync option and DIRECTIO is set to NO within the IBM Spectrum Protect server
[7] SMB and S3 have not yet been tested/validated

Trademarks

IBM Spectrum Protect is a registered trademark of IBM Corporation in the United States, other countries, or both.

Cohesity, Inc.
300 Park Ave., Suite 300, San Jose, CA 95110
Email: [email protected]
www.cohesity.com
©2018 Cohesity. All Rights Reserved.

@cohesity
