
MK-97MDF8124-00

Hitachi Adaptable Modular Storage Copy-on-write SnapShot User's Guide

Preliminary


Copyright © 2010 Hitachi Ltd., Hitachi Data Systems Corporation, ALL RIGHTS RESERVED

No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose without the express written permission of Hitachi, Ltd. and Hitachi Data Systems Corporation (hereinafter referred to as “Hitachi”).

Hitachi, Ltd. and Hitachi Data Systems reserve the right to make changes to this document at any time without notice and assume no responsibility for its use. Hitachi, Ltd. and Hitachi Data Systems products and services can only be ordered under the terms and conditions of Hitachi Data Systems' applicable agreements.

All of the features described in this document may not be currently available. Refer to the most recent product announcement or contact your local Hitachi Data Systems sales office for information on feature and product availability.

Notice: Hitachi Data Systems products and services can be ordered only under the terms and conditions of Hitachi Data Systems’ applicable agreement(s). The use of Hitachi Data Systems products is governed by the terms of your agreement(s) with Hitachi Data Systems.

Trademarks

Hitachi is a registered trademark and service mark of Hitachi, Ltd., and the Hitachi design mark is a trademark and service mark of Hitachi, Ltd.

Microsoft, Windows, and Windows Server are registered trademarks or trademarks of Microsoft Corporation.

UNIX is a registered trademark of X/Open Company Limited in the United States and other countries and is licensed exclusively through X/Open Company Limited.

All other brand or product names are or may be registered trademarks, trademarks or service marks of and are used to identify products or services of their respective owners.

Notice of Export Controls

Export of technical data contained in this document may require an export license from the United States government and/or the government of Japan. Contact the Hitachi Legal Department for any export compliance questions.

Document Revision Level

Revision            Date          Description
ICS-97MDF8124-00    July 2010     Draft, Early External Evaluation Purposes Only
MK-97MDF8124-00     October 2010  Supersedes and replaces ICS-97MDF8124-00


Preface

The Copy-on-write SnapShot User’s Guide describes and provides instructions for performing SnapShot operations on the AMS array using the SnapShot software.

Note:

Throughout this manual, the term “SnapShot” refers to the Copy-on-write SnapShot.

Throughout this manual, the term “ShadowImage” refers to the ShadowImage in-system replication.

Throughout this manual, the term “Volume Migration” refers to the Modular Volume Migration.

Throughout this manual, the term “TrueCopy” refers to the TrueCopy remote replication.

Throughout this manual, the term “TCE” refers to the TrueCopy Extended Distance.

This user’s guide assumes the following:

The user has a background in data processing and understands RAID storage arrays and their basic functions.

The user is familiar with the AMS array.

The user has read and understands the Command Control Interface (CCI) Reference Guide.

The user is familiar with the Windows 2000, Windows Server 2003, and/or Windows Server 2008 operating systems.


Contents

Chapter 1 Overview of Hitachi Adaptable Modular Storage Copy Solutions ... 1

Chapter 2 Overview of SnapShot ... 3
2.1 Typical SnapShot System Environment ... 4
2.2 SnapShot Components ... 5
2.2.1 SnapShot Volume Pairs (P-VOLs and V-VOLs) ... 5
2.2.2 SnapShot Data Pools ... 6
2.2.3 Consistency Group (CTG) ... 6
2.2.4 Differential Management LUs ... 6
2.2.5 Command Devices ... 7
2.3 SnapShot Features ... 8
2.3.1 Differential Data ... 8
2.3.2 Redundancy ... 8
2.4 SnapShot Functional Overview ... 12
2.4.1 SnapShot Operations ... 12
2.4.1.1 Pair Creation ... 13
2.4.1.2 Updating V-VOL ... 13
2.4.1.3 Restoration ... 14
2.4.1.4 Pair Failures ... 16
2.4.1.5 Pair Deleting ... 17
2.4.2 Pair Status ... 18
2.5 Cascade Connection of SnapShot with TrueCopy ... 20
2.5.1 Cascade Restrictions with P-VOL of SnapShot ... 21
2.5.2 Cascade Restrictions with V-VOL of SnapShot ... 22
2.5.3 Restrictions Configuration on the Cascade of TrueCopy with SnapShot ... 24
2.5.4 Cascade Restrictions with Data Pool of SnapShot ... 24
2.6 Cascade Connection of SnapShot with TCE ... 25
2.6.1 Cascade Restrictions with P-VOL of TCE ... 26
2.6.2 Cascade Restrictions with S-VOL of TCE ... 27

Chapter 3 SnapShot Requirements ... 29
3.1 System Requirements ... 30
3.1.1 SnapShot Requirements ... 30
3.2 Management Software ... 32
3.2.1 Navigator 2 ... 32
3.2.2 Command Control Interface ... 32
3.3 Supported Capacity ... 33
3.3.1 Maximum Supported Capacity of P-VOL and Data Pool for Each Cache Memory Capacity ... 33
3.3.2 Maximum Supported Capacity of Concurrent Use of Other Copy Functions ... 39

Chapter 4 Setting Up Replication System ... 41
4.1 Recommendations ... 42
4.1.1 Pair Assignment ... 42
4.1.2 Locating P-VOLs and Data Pools ... 43
4.1.3 P-VOLs and Data Pools in a RAID Configuration ... 45
4.1.4 Command Devices ... 45
4.1.5 Differential Management LUs ... 46
4.1.6 LU Ownership of P-VOLs and Data Pools ... 46
4.2 Determining Data Pool Capacity ... 48
4.3 Cautions and Restrictions ... 51
4.3.1 Specifying P-VOL and V-VOL when Pair Operation ... 51
4.3.2 LU Mapping and SnapShot Configuration ... 52
4.3.3 Cluster Software, Path Switching Software, and SnapShot Configuration ... 52
4.3.4 MSCS and SnapShot Configuration ... 52
4.3.5 AIX and SnapShot Configuration ... 52
4.3.6 VxVM and SnapShot Configuration ... 52
4.3.7 Windows 2000 and SnapShot Configuration ... 52
4.3.8 Windows Server 2003/Windows Server 2008 and SnapShot Configuration ... 53
4.3.9 Linux and LVM Configuration ... 54
4.3.10 Tru64 UNIX and SnapShot Configuration ... 54
4.3.11 Windows Server 2008/Windows Server 2003/Windows 2000 and Dynamic Disk ... 54
4.3.12 VMWare and SnapShot Configuration ... 54
4.3.13 Concurrent Use of Cache Partition Manager ... 55
4.3.14 Concurrent Use of Dynamic Provisioning ... 55
4.3.15 User Data Area of Cache Memory ... 59
4.4 Installing and Uninstalling SnapShot ... 62
4.4.1 Installing SnapShot ... 63
4.4.2 Uninstalling SnapShot ... 65
4.4.3 Enabling or Disabling SnapShot ... 67
4.5 Operations for SnapShot Configuration ... 70
4.5.1 Setting the DMLU ... 70
4.5.2 Setting Data Pool Volumes ... 71
4.5.3 Editing Data Pool Volumes ... 73
4.5.4 Deleting Data Pool Volumes ... 73
4.5.5 Setting V-VOLs ... 74
4.5.6 Deleting V-VOLs ... 74
4.5.7 Setting the LU Ownership ... 75

Chapter 5 Performing SnapShot GUI Operations ... 77
5.1 Operations Workflow ... 78
5.2 Pair Operations ... 79
5.2.1 Confirming Pair Status ... 79
5.2.2 Creating Pairs ... 80
5.2.3 Updating the V-VOL ... 82
5.2.4 Restoring the V-VOL to the P-VOL ... 83
5.2.5 Releasing Pairs ... 83
5.2.6 Changing Pair Information ... 84
5.2.7 Creating Pairs that Belong to a Group ... 85

Chapter 6 System Operation Example ... 87
6.1 Backup Operation for Quick Recovery ... 88
6.2 Online Backup Operation Using an Inexpensive Configuration ... 89
6.3 Restoring Backup Data ... 90
6.3.1 Backup Operation for Quick Recovery ... 91
6.3.2 Restoration Backup Data from a Tape Device ... 92

Chapter 7 Operations Using CLI ... 95
7.1 Installing SnapShot ... 96
7.1.1 Installing SnapShot ... 96
7.1.2 Uninstalling SnapShot ... 98
7.1.3 Enabling or Disabling SnapShot ... 99
7.2 Operations for SnapShot Configuration ... 101
7.2.1 Setting the DMLU ... 101
7.2.2 Setting the Data Pool ... 101
7.2.3 Setting the V-VOL ... 103
7.2.4 Setting the LU Ownership ... 104
7.3 Performing SnapShot CLI Operations ... 105
7.3.1 Creating SnapShot Pairs ... 105
7.3.2 Updating SnapShot Logical Unit ... 106
7.3.3 Restoring V-VOL to P-VOL ... 107
7.3.4 Releasing SnapShot Pairs ... 108
7.3.5 Changing Pair Information ... 109
7.3.6 Creating Pairs that Belong to a Group ... 110
7.4 Applications of CLI Commands ... 111

Chapter 8 Operations Using CCI ... 113
8.1 Preparing for CCI Operations ... 114
8.1.1 Setting the Command Device ... 114
8.1.2 Setting Mapping Information ... 116
8.2 Creating the Configuration Definition File ... 117
8.3 Setting the Environment Variable ... 120
8.4 Performing SnapShot Operations ... 121
8.4.1 Confirming Pair Status ... 122
8.4.2 Paircreate Operation ... 122
8.4.3 Updating the V-VOL ... 124
8.4.4 Restoring a V-VOL to the P-VOL ... 125
8.4.5 Releasing SnapShot Pairs ... 126
8.5 Note about Confirm Pairs by Navigator 2 ... 127

Chapter 9 System Monitoring and Maintenance ... 129
9.1 Monitoring of Pair Failure ... 130
9.2 Monitoring of Data Pool Usage ... 132

Chapter 10 Troubleshooting ... 135
10.1 Troubleshooting ... 136
10.1.1 Pair Failure ... 136
10.1.2 Data Pool Capacity Exceeds Threshold Value ... 139
10.1.3 Cases and Solutions Using the DP-VOLs ... 139

Appendix A SnapShot Specifications ... 141
A.1 External Specifications ... 141

Appendix B Installing SnapShot when Cache Partition Manager is Being Used ... 145

Index ... 147


List of Figures

Figure 2.1 SnapShot Components ... 5
Figure 2.2 Differential Data ... 8
Figure 2.3 P-VOL Failures ... 9
Figure 2.4 Data Pool (S-VOL) Failures ... 10
Figure 2.5 Data Pool (S-VOL) Failures during Restore Operation ... 11
Figure 2.6 Creating a SnapShot Pair ... 13
Figure 2.7 Operation Example when SnapShot Operation is Performed to the Other V-VOL During the Restoration ... 15
Figure 2.8 SnapShot Pair Status Transitions ... 18
Figure 2.9 Cascade Connection of SnapShot with TrueCopy ... 20
Figure 2.10 Restrictions Configuration on the Cascade of TrueCopy with SnapShot ... 24
Figure 2.11 Cascade Connection of SnapShot with TCE ... 25
Figure 5.1 Pair Operations ... 78
Figure 6.1 Ordinarily Quick Recovery Operation ... 88
Figure 6.2 Ordinarily Operation ... 89
Figure 8.1 SnapShot Pair Status Transitions ... 121
Figure 10.1 Pair Status Information Example Using SnapShot ... 137
Figure B.1 When Cache Partition Manager is Used ... 145
Figure B.2 Where SnapShot is Installed while Cache Partition Manager is Used ... 146


List of Tables

Table 1.1 SnapShot and ShadowImage Functions ... 2
Table 2.1 SnapShot Functions ... 12
Table 2.2 Time Required to Examine Differential ... 16
Table 2.3 SnapShot Pair Status ... 19
Table 2.4 A Read/Write Instruction to a P-VOL of SnapShot on the Local Side (TrueCopy) ... 21
Table 2.5 A Read/Write Instruction to a P-VOL of SnapShot on the Remote Side (TrueCopy) ... 22
Table 2.6 A Read/Write Instruction to a V-VOL of SnapShot on the Local Side (TrueCopy) ... 23
Table 2.7 A Read/Write Instruction to a P-VOL of SnapShot on the Local Side (TCE) ... 26
Table 2.8 A Read/Write Instruction to a P-VOL of SnapShot on the Remote Side (TCE) ... 27
Table 3.1 Environments and Requirements of SnapShot ... 30
Table 3.2 Formula for Calculating Maximum Supported Capacity Value for P-VOL/Data Pool (AMS2100) ... 33
Table 3.3 Formula for Calculating Maximum Supported Capacity Value for P-VOL/Data Pool (AMS2300) ... 33
Table 3.4 Formula for Calculating Maximum Supported Capacity Value for P-VOL/Data Pool (AMS2500) ... 34
Table 3.5 Supported Capacity Value of the P-VOL/Data Pool (When Cache Memory is 2 GB/CTL: AMS2100) ... 34
Table 3.6 Supported Capacity Value of the P-VOL/Data Pool (When Cache Memory is 4 GB/CTL: AMS2100) ... 34
Table 3.7 Supported Capacity Value of the P-VOL/Data Pool (When Cache Memory is 2 GB/CTL: AMS2300) ... 35
Table 3.8 Supported Capacity Value of the P-VOL/Data Pool (When Cache Memory is 4 GB/CTL: AMS2300) ... 35
Table 3.9 Supported Capacity Value of the P-VOL/Data Pool (When Cache Memory is 8 GB/CTL: AMS2300) ... 35
Table 3.10 Supported Capacity Value of the P-VOL/Data Pool (When Cache Memory is 2 GB/CTL: AMS2500) ... 35
Table 3.11 Supported Capacity Value of the P-VOL/Data Pool (When Cache Memory is 4 GB/CTL: AMS2500) ... 35
Table 3.12 Supported Capacity Value of the P-VOL/Data Pool (When Cache Memory is 6 GB/CTL: AMS2500) ... 36
Table 3.13 Supported Capacity Value of the P-VOL/Data Pool (When Cache Memory is 8 GB/CTL: AMS2500) ... 36
Table 3.14 Supported Capacity Value of the P-VOL/Data Pool (When Cache Memory is 10 GB/CTL: AMS2500) ... 36
Table 3.15 Supported Capacity Value of the P-VOL/Data Pool (When Cache Memory is 12 GB/CTL: AMS2500) ... 36
Table 3.16 Supported Capacity Value of the P-VOL/Data Pool (When Cache Memory is 16 GB/CTL: AMS2500) ... 36
Table 3.17 Single Maximum Capacity of SnapShot (TB) ... 39
Table 4.1 P-VOL and Data Pool RAID Configuration ... 45
Table 4.2 Recommended Value of the Data Pool Capacity (When the P-VOL Capacity is 1 TB) ... 49
Table 4.3 Combination of a DP-VOL and a Normal LU ... 56
Table 4.4 Pair Statuses before the DP Pool Capacity Depletion and Pair Statuses after the DP Pool Capacity Depletion ... 57
Table 4.5 DP Pool Statuses and Availability of SnapShot Pair Operation ... 57
Table 4.6 Supported Capacity of the Regular Capacity Mode ... 59
Table 4.7 Supported Capacity of the Regular Capacity Mode ... 60
Table 4.8 Supported Capacity of the Maximum Capacity Mode ... 61
Table 8.1 Pair Status ... 122
Table 9.1 Pair Failure Results ... 130
Table 9.2 CCI System Log Message ... 130
Table 10.1 Operational Notes for SnapShot Operations ... 138
Table 10.2 Data Assurance and the Method for Recovering the Pair ... 138
Table 10.3 Cases and Solutions Using the DP-VOLs ... 139
Table A.1 External Specifications ... 141


Chapter 1 Overview of Hitachi Adaptable Modular Storage Copy Solutions

Hitachi Adaptable Modular Storage copy solutions comprise ShadowImage and SnapShot, both of which provide copy functions within a disk array.

[Figure: a host connected via a SAN to arrays containing ShadowImage or SnapShot copies, with Hitachi Storage Navigator Modular 2 connected over the management LAN]

SnapShot and ShadowImage both create a duplicate within a disk array; however, their uses differ because their external specifications and data assurance measures differ. The advantages, disadvantages, and functions of SnapShot and ShadowImage are shown below.

Table 1.1 SnapShot and ShadowImage Functions

Advantages

SnapShot:
• The amount of physical capacity used for a V-VOL is small because only the differential data is managed.
• Up to 32 V-VOLs per P-VOL can be created.
• The data pool can be shared by two or more P-VOLs and the same number of V-VOLs, so its capacity can be managed centrally.
• A pair creation/resynchronization is completed in a moment.

ShadowImage:
• When a hardware failure occurs in the P-VOL, it has no effect on the S-VOL.
• When a failure occurs in an S-VOL, it has no effect on the other S-VOLs.
• Access performance is only slightly lowered in comparison with ordinary cases because the P-VOL and S-VOL are independent LUs.

Disadvantages

SnapShot:
• A hardware failure in the P-VOL places all the V-VOLs correlated to that P-VOL in the Failure status.
• A hardware failure in the data pool, or a shortage of data pool capacity, places all the V-VOLs that use the data pool in the Failure status.
• Data of a V-VOL that has been placed in the Failure status cannot be restored.
• When the P-VOL is accessed, access performance is lowered in comparison with ordinary cases because data is copied to the data pool.
• When the V-VOL is accessed, the performance of access to the data pool is lowered because the V-VOL data is shared between the P-VOL and the data pool.

ShadowImage:
• Only eight S-VOLs can be created per P-VOL.
• The S-VOL must have the same capacity as the P-VOL.
• A pair creation/resynchronization requires time because it involves copying data from the P-VOL to the S-VOL.

Uses

SnapShot:
• Backup for quick recovery: restore quickly when a software failure occurs by managing multiple backups (for example, by making backups every several hours and managing them according to their generations). Because redundancy is low, it is important to also back up onto a tape device.
• Online backup: store backup data on a tape device when I/O operations are few (at night, for instance), using as little disk capacity as possible.

ShadowImage:
• Backup for quick recovery: not recommended.
• Online backup: recommended when many I/O operations occur at night or the amount of data to be backed up is too large to be processed during the night.


Chapter 2 Overview of SnapShot

This chapter presents an overview of SnapShot software and discusses the functional and operational details of the product. This chapter includes the following sections:

Typical SnapShot System Environment (see section 2.1)

SnapShot Components (see section 2.2)

SnapShot Features (see section 2.3)

SnapShot Functional Overview (see section 2.4)

Cascade Connection of SnapShot with TrueCopy (see section 2.5)

Cascade Connection of SnapShot with TCE (see section 2.6)


2.1 Typical SnapShot System Environment

A typical SnapShot hardware configuration includes an AMS array, a host connected to the AMS array, and a management host. The host is connected to the AMS array via Fibre Channel connections. The management host is connected to the AMS array via a management LAN.

The logical configuration of the AMS array includes a command device, a differential management logical unit (DMLU), primary data volumes (P-VOLs) belonging to the same consistency group (CTG), virtual volumes (V-VOLs) and a logical unit for the data pool. SnapShot creates a volume pair from a primary volume (P-VOL), which contains the original data, and a SnapShot Image (V-VOL), which contains the snapshot data. SnapShot uses the V-VOL as the secondary volume (S-VOL) of the volume pair. Since each P-VOL is paired with its V-VOL independently, each volume can be maintained as an independent copy set.

In addition to the above components, the SnapShot system architecture includes the Hitachi Storage Navigator Modular 2 (called Navigator 2 hereafter) software and the Command Control Interface (called CCI hereafter) software. Navigator 2 is installed on the management host and is used to configure the SnapShot system environment and to manage SnapShot volume pair operations. CCI is installed on the host and manages SnapShot volume pair operations.


2.2 SnapShot Components

SnapShot operations involve the primary volumes (P-VOLs), SnapShot Images (V-VOLs), and the data pool in the AMS array, together with Navigator 2 and CCI. Figure 2.1 shows a typical SnapShot configuration. The SnapShot system components include:

SnapShot Volume Pairs (P-VOLs and V-VOLs) (see section 2.2.1)

SnapShot Data Pools (see section 2.2.2)

Consistency Group (CTG) (see section 2.2.3)

Differential Management LUs (see section 2.2.4)

Command Devices (see section 2.2.5)

Figure 2.1 SnapShot Components

[Figure 2.1: within the array, a P-VOL (physical data) is paired with V-VOLs (logical data) backed by a data pool that holds the differential (physical) data; a DMLU and a command device are also configured. The host performs data Read/Write and runs CCI, which controls SnapShot (paircreate, pairsplit, etc.) through the command device. Navigator 2 on a management PC controls and sets SnapShot (paircreate, pairsplit, etc., and specifying/releasing command devices, data pools, and V-VOLs). P-VOL: Primary Volume; V-VOL: Snapshot Image; R/W: Read/Write; DMLU: Differential Management LU]

2.2.1 SnapShot Volume Pairs (P-VOLs and V-VOLs)

The AMS array manages both the LU that holds the original data and the LUs to which data is copied at the time a SnapShot instruction is issued. The LU holding the original data is called a P-VOL, and the LU holding the data copied at the time of the SnapShot instruction is called a V-VOL. These LUs exist in the same AMS array. A set consisting of a P-VOL and a V-VOL is called a SnapShot pair. One P-VOL can be paired with up to 32 V-VOLs; when one P-VOL is paired with 32 V-VOLs, the number of pairs is 32. The AMS2100 supports up to 1,022 SnapShot pairs; the AMS2300 and AMS2500 support up to 2,046 pairs.

A V-VOL holds no data upon creation alone. The V-VOL receives the copied data when a pair is created by specifying a P-VOL and an optional V-VOL that have not yet been paired, and the SnapShot instruction is issued to the created pair.
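For orientation, a CCI configuration-definition fragment describing one P-VOL paired with two V-VOLs might look like the following sketch. The group name, device names, serial number, and LDEV numbers are hypothetical, and the mirror-unit (MU) numbers are what distinguish multiple V-VOLs of the same P-VOL; see Chapter 8 and the CCI Reference Guide for the supported format.

    # horcm0.conf fragment (hypothetical values)
    HORCM_LDEV
    # dev_group  dev_name  Serial#   CU:LDEV(LDEV#)  MU#
    snapgrp      oradb_v0  85000123  100             0
    snapgrp      oradb_v1  85000123  100             1

Both lines name the same local P-VOL LDEV with different MU numbers; the paired CCI instance's file lists the corresponding V-VOL LDEVs under the same group and MU numbers.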


2.2.2 SnapShot Data Pools

A V-VOL is a virtual LU that does not actually have disk capacity. In order to make the V-VOL retain data at the time when the SnapShot instruction is issued, it is required to save the P-VOL data before it is overwritten by the Write command. The saved data is called differential data and an area that stores the differential data is called a data pool.

Up to 64 data pools can be created per AMS array and a pool to be used by a certain P-VOL is specified when a pair is created. A data pool to be used can be specified for each P-VOL and V-VOLs that pair with the same P-VOL must use a common data pool. Two or more SnapShot pairs can share a single data pool.

When only one data pool is used, operation performance is limited because the same controller controls all the P-VOLs. Therefore, we strongly recommend creating two or more data pools.

2.2.3 Consistency Group (CTG)

A file system that stores application data, or a logical volume on an OS, may be configured from two or more logical units. In this case, it must be assured that the data of those logical units is of the same point in time. To assure that the data of two or more SnapShot Images is of the same point in time, and to manage the images as a group, the AMS array provides the consistency group (CTG). The AMS array assures that the data of the SnapShot Images in a group is of the same point in time by splitting the pairs in the group using the CTG. When the CTG is not used, the timing of each SnapShot Image's split varies because the pairs in the group are split in order; therefore, if an I/O instruction is received from a host in the middle of splitting the pairs in the group, data of a different time is stored in each SnapShot Image.

To use the CTG when splitting a pair, you must specify the New or Existing Group Number or Existing Group Name option at the time of the pair creation.
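As an illustrative sketch with CCI (the group name and CTG number are hypothetical, and the -m grp option is CCI's mechanism for assigning local-replication pairs to a consistency group; verify the exact form against the CCI Reference Guide):

    # SnapShot is controlled through CCI's local-replication (MRCF) mode
    export HORCC_MRCF=1
    # Create every pair defined in group "dbgrp" and assign them to consistency group 0
    paircreate -g dbgrp -vl -m grp 0
    # Split the whole group at once, so every V-VOL reflects the same point in time
    pairsplit -g dbgrp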

2.2.4 Differential Management LUs

The Differential Management LU is an exclusive logical unit for storing the differential data at the time when the volume is copied. The Differential Management LU in the AMS array is treated in the same way as the other logical units. However, a logical unit that is set as the Differential Management LU is not recognized by a host (it is hidden).

Set a logical unit with a size of at least 10 GB as the Differential Management LU. Up to two Differential Management LUs can be set; the second one is used for mirroring. Setting two Differential Management LUs is recommended.


2.2.5 Command Devices

The command device is a user-selected, dedicated logical volume on the AMS array, which functions as the interface to the CCI software. SnapShot commands are issued by CCI (HORCM) to the AMS array command device.

A command device must be designated in order to issue SnapShot commands. The command device must be defined in the HORCM_CMD section of the configuration definition file for the CCI instance on the attached host. Up to 128 command devices can be designated for the AMS array. You can designate command devices using Navigator 2.

Note: LUs set as command devices must be recognized by the host. The command device LU size must be at least 33 MB.
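A minimal HORCM_CMD declaration, assuming a Windows host where the command device appears as PhysicalDrive2 (the device path is hypothetical; section 8.2 describes the configuration definition file):

    # horcm0.conf fragment (hypothetical device path)
    HORCM_CMD
    # dev_name: the LU designated as a command device, as seen by the host
    \\.\PhysicalDrive2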


2.3 SnapShot Features

2.3.1 Differential Data

When a SnapShot operation is performed, the V-VOL is maintained through management of differential data (the locations of Write data from the host in the P-VOL and V-VOL) and through reference to the bit map during host Read operations (Figure 2.2 illustrates differential data). One bit in the bit map covers an extent of 64 kB. Therefore:

Even an update of a single kB requires a data transfer as large as 64 kB to copy from the P-VOL to the data pool.

The amount copied becomes smaller when the locality of the host access pattern is high.

Figure 2.2 Differential Data

[Figure 2.2: with high locality, updated data clusters within a few 64 kB extents, so the copy range is small and the time required for access to a volume is short; with low locality, updates scatter across many extents, so the copy range is large and the time required for access to a volume is long]
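A rough worked example of this 64 kB granularity (illustrative numbers only): if a host issues 1,000 random 4 kB writes that each land in a different 64 kB extent, SnapShot must first save 1,000 × 64 kB = 64,000 kB (about 62.5 MB) of old data to the data pool, although only about 4 MB of new data was written. If the same 1,000 writes cluster into 100 extents (high locality), only 100 × 64 kB = 6,400 kB (about 6.25 MB) is copied.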

2.3.2 Redundancy

SnapShot and ShadowImage are identical functions from the viewpoint of producing a duplicate within an AMS array. However, the duplicated volume (S-VOL) of ShadowImage is a copy of the entire P-VOL data in a single LU, while the duplicated volume (V-VOL) of SnapShot consists of the P-VOL data plus differential data saved in the data pool. Therefore, when a hardware failure, such as a double failure of drives, occurs in the P-VOL, a similar failure also occurs in the V-VOL and the pair status changes to Failure (see section 2.4.2).


The data pool can be shared by two or more P-VOLs and V-VOLs. However, when a hardware failure (such as a double failure of drives) occurs in the data pool, similar failures occur in all the V-VOLs that use the data pool, and their pair statuses change to Failure. When the data pool capacity is insufficient, all the V-VOLs that use the data pool are placed in the Failure status because the differential data cannot be saved and the pairs cannot be maintained. When a V-VOL is placed in the Failure status, the data retained in it cannot be restored.

When a hardware failure occurs in the data pool or S-VOL during a restoration, for both SnapShot and ShadowImage, the P-VOL being restored accepts no Read/Write instructions. The differences between SnapShot and ShadowImage in redundancy are shown below.

P-VOL failures

Figure 2.3 P-VOL Failures

[Figure 2.3: in SnapShot, when a double failure of drives or similar occurs in the P-VOL, all the V-VOLs are affected, because each V-VOL refers to the P-VOL data; in ShadowImage, the same failure has no effect on the S-VOL, because the entire P-VOL data has been copied to the S-VOL]


Data pool (S-VOL) failures

Figure 2.4 Data Pool (S-VOL) Failures

[Figure 2.4: in SnapShot, a double failure of drives in the data pool, or insufficient data pool capacity, affects all the V-VOLs that refer to the differential data shared in the data pool; in ShadowImage, each S-VOL is an independent LU, so a failure affects nothing except the S-VOL in which it occurred]


Data pool (S-VOL) failures during a restoration

Figure 2.5 Data Pool (S-VOL) Failures during Restore Operation

[Figure 2.5: in SnapShot, when a failure occurs in the data pool during a restore operation, no Read/Write instruction is accepted by the V-VOLs that refer to the data pool, or by the P-VOL being restored; in ShadowImage, when a failure occurs in the S-VOL during a restore operation, the P-VOL being restored accepts no Read/Write instruction]


2.4 SnapShot Functional Overview

Table 2.1 lists and describes SnapShot functions.

Table 2.1 SnapShot Functions

Function             Contents
Pair creation        Supported. P-VOL: Read/Write; V-VOL: Read only.
Updating V-VOL data  Supported. P-VOL: Read/Write; V-VOL: Read/Write.
Restoration          Supported. P-VOL: Read/Write; V-VOL: does not accept I/O operations (Read/Write).
Pair releasing       Supported.

2.4.1 SnapShot Operations


2.4.1.1 Pair Creation

Use the button to create SnapShot pair(s). A SnapShot pair is created the moment the button is pushed, and the pair status becomes Paired. When the button is pushed, the volume assigned to the V-VOL must be in the Simplex status.

When the button is pushed with the Split the pair immediately after creation is completed option specified, the pair status becomes Split.

Figure 2.6 Creating a SnapShot Pair

[Figure 2.6: creating a pair takes the P-VOL and V-VOL from Simplex to Paired; any duplex pair can then be split, changing the status from Paired to Split]
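The same operation is available from CCI. A hedged sketch follows, with a hypothetical group name and pair definitions assumed to exist in the configuration definition files (see Chapter 8 for the actual procedure):

    # SnapShot uses CCI's local-replication (MRCF) mode
    export HORCC_MRCF=1
    # Create the pair; the status becomes Paired
    paircreate -g snapgrp -vl
    # Wait until the pair reaches the Paired status
    pairevtwait -g snapgrp -s pair -t 300
    # Alternatively, create and split in one step (status becomes Split/PSUS):
    # paircreate -g snapgrp -vl -split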

2.4.1.2 Updating V-VOL

When executing the SnapShot instruction (a data update) on an existing SnapShot pair, change the pair status from Split to Paired using the button, and then change it back to Split using the button. The P-VOL data that exists when the button is pushed is retained in the V-VOL, in the same way as when the pair is created with the Split the pair immediately after creation is completed option specified.

Note: Differential data stored in the data pool is deleted when the pair status is changed to Paired or to Simplex using the button. The deletion of the differential data is not completed immediately after the pair status changes to Paired or Simplex; it completes after a few moments. The time required for the deletion process is proportional to the P-VOL capacity. As a standard, for a 100 GB P-VOL it takes about five minutes with a 1:1 pair configuration and about 15 minutes with a 1:32 pair configuration.
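In CCI terms, updating the V-VOL is a resynchronization followed by a new split; a sketch with a hypothetical group name:

    export HORCC_MRCF=1
    # Return the pair to Paired; the old snapshot (differential data) is discarded
    pairresync -g snapgrp
    pairevtwait -g snapgrp -s pair -t 300
    # Split again; the V-VOL now retains the P-VOL data as of this instant
    pairsplit -g snapgrp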


2.4.1.3 Restoration

The button is used to restore backup data retained in the V-VOL to the P-VOL. When a restoration instruction is given, the P-VOL data is immediately replaced logically with the backup data retained in the V-VOL. While the backup data in the V-VOL is being physically duplicated to the P-VOL, the pair status is Reverse Synchronizing. For the V-VOL being restored, no SnapShot or Read/Write instruction can be executed while the pair status is Reverse Synchronizing. When the ratio of P-VOL to V-VOLs is 1:n (n > 1), Read/Write operations to the other V-VOLs cannot be performed either, in the same way as for the V-VOL being restored. When the restoration completes, reading/writing from/to the other V-VOLs in the Split status becomes possible again. When a V-VOL in the Split status becomes readable and writable again, it retains the data it had at the time of its SnapShot instruction, and it is not affected by the restoration of the V-VOL data to the P-VOL.

A pair can be split while its status is Reverse Synchronizing; however, the P-VOL data being restored cannot then be used logically, and the V-VOLs correlated to the P-VOL whose status is other than Simplex are placed in the Failure status. Do not split a pair while its status is Reverse Synchronizing, except in an emergency.

When the restoration instruction of the V-VOL data is issued to the P-VOL, the pair status is not changed to Paired immediately but to Reverse Synchronizing. The P-VOL data, however, is promptly replaced logically with the backup data retained in the V-VOL. When the SnapShot instruction is issued to another V-VOL after the restoration instruction, that V-VOL retains the P-VOL data at the time of its SnapShot instruction; that data is the replaced backup data, even before the restoration physically completes.
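From CCI, the restoration is a resynchronization in the reverse direction; a sketch with a hypothetical group name (CCI shows the Reverse Synchronizing status as COPY(RS)):

    export HORCC_MRCF=1
    # Restore the backup data retained in the V-VOL to the P-VOL;
    # the pair status becomes Reverse Synchronizing (COPY(RS) in CCI)
    pairresync -g snapgrp -restore
    # Wait until the background copy completes and the status returns to Paired
    pairevtwait -g snapgrp -s pair -t 600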


Figure 2.7 Operation Example when SnapShot Operation is Performed to the Other V-VOL During the Restoration

[Figure 2.7: initially the P-VOL holds data A, V-VOL1 (Split) holds backup data B, and V-VOL2 is Simplex; a restoration of V-VOL1 to the P-VOL is started, and a SnapShot instruction is then issued to V-VOL2 while the restoration is still synchronizing.]

When the P-VOL is unmounted and the restoration operation is performed, the P-VOL data is promptly replaced with the data retained in V-VOL1 (data B). The P-VOL accepts I/O instructions issued by a host while V-VOL1 retains the backup data (data B).

When the SnapShot instruction is performed to V-VOL2, it retains the replaced P-VOL data (data B) even if V-VOL1 is in the COPY(RS) status. Host I/O is executed against the P-VOL, whose data has been replaced, immediately after the restoration operation. When no write I/O instruction is issued to the P-VOL between the restoration operation to V-VOL1 and the SnapShot instruction to V-VOL2, the data of V-VOL1 before the restoration is the same as that of V-VOL2.

Note: Even when no differential exists between a P-VOL and the V-VOL to be restored, the restoration is not completed immediately; it takes time to examine the differential between the P-VOL and V-VOL. The method of searching for differentials is "Search All": the whole differential management area on the cache memory (the bit map indicating differential data) is scanned.

The standard time required to examine the differential is shown below, for the case where no host I/O is issued, the capacity per LU is 100 GB, and the whole differential management area is searched (the rate of coincidence is approximately 100%).


Table 2.2 Time Required to Examine Differential

Equipment Type     Copy Pace              Search Time (minutes)
AMS2100/AMS2300    11 to 15 (Fast copy)   Approximately 6
AMS2100/AMS2300    6 to 10 (Medium copy)  Approximately 6
AMS2100/AMS2300    1 to 5 (Slow copy)     Approximately 15
AMS2500            11 to 15 (Fast copy)   Approximately 5
AMS2500            6 to 10 (Medium copy)  Approximately 5
AMS2500            1 to 5 (Slow copy)     Approximately 10

Notes:

We recommend a copy pace of Medium. If you specify Medium, the time to complete the copying may vary with the host I/O load. If you specify Fast, host I/O performance deteriorates. To suppress the deterioration of host I/O performance further than with Medium, specify Slow.

The restoration command can be issued to up to 128 P-VOLs at the same time. However, the number of P-VOLs for which the physical copying (background copying) from a V-VOL can run concurrently is up to four per controller (AMS2100/AMS2300) or eight per controller (AMS2500). Background copies are executed in the order the commands were issued; the remaining background copies are completed in ascending order of LU numbers after the preceding restorations complete.

2.4.1.4 Pair Failures

When the state of the V-VOL data becomes unstable as a result of a failure (such as a double failure of drives, or the data pool capacity exceeding its limit within the AMS array), the pair is placed in the Failure status. When such an event occurs during a restoration process, the P-VOL being restored becomes unable to accept Read/Write instructions and the V-VOL data becomes invalid. To resume the pair that could not be retained, split the pair once using the button and execute the SnapShot instruction again. However, the V-VOL that is created does not have the previously invalidated data; it contains the P-VOL data at the time of the new SnapShot instruction. When the AMS array places a pair in the Failure status, CCI outputs a message to the system log or event log file in order to inform the host.

In the following situations, the AMS array places a pair in the Failure status:

When a data pool in the AMS array cannot be accessed because of a double drive failure, the AMS array changes pairs in the Split status to the Failure status.

When the differential data cannot be saved because no additional free capacity is available in the data pool, the AMS array places all pairs that use the data pool in the Failure status.


2.4.1.5 Pair Deleting

When the button is pushed, any pair in the Paired, Split, Synchronizing, or Failure status can be released at any time, after which the volumes are placed in the Simplex status.

When the button is pushed, the V-VOL data is annulled and invalidated immediately. Therefore, if you access the V-VOL after the pair is released, the data retained before the release is not available.
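The corresponding CCI operation is a simplex split; a sketch with a hypothetical group name:

    export HORCC_MRCF=1
    # Release the pair: both volumes return to Simplex and the V-VOL data is discarded
    pairsplit -g snapgrp -S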


2.4.2 Pair Status

SnapShot displays the pair status of all SnapShot volumes (LUs). Figure 2.8 shows the pair statuses and the SnapShot operations that move pairs between them.

Figure 2.8 SnapShot Pair Status Transitions

[Figure 2.8: from Simplex, a create-pair operation leads to Paired, or directly to Split when the Split the pair immediately after creation is completed option is specified; a split takes Paired to Split; a create pair/update pair operation takes Split back to Paired; a restoration takes Split to Reverse Synchronizing, which returns to Paired when the restoration completes; an error from Paired, Split, or Reverse Synchronizing leads to Failure]

Table 2.3 lists and describes the SnapShot pair status conditions.

If a volume is not assigned to a SnapShot pair, its status is Simplex. When the button is pushed with the Split the pair immediately after creation is completed option specified, the statuses of the P-VOL and the V-VOL change to Split.

It is possible to access the P-VOL or V-VOL in the Split status. The pair status changes to Failure (interruption) when the V-VOL cannot be created or updated, or when the V-VOL data cannot be retained because of an AMS array failure. When the button is pushed, the pair is released and the pair status changes to Simplex.


Table 2.3 SnapShot Pair Status

Simplex
Description: No volume is assigned to a SnapShot pair. A P-VOL in the Simplex status accepts Read/Write I/O operations; a V-VOL in the Simplex status does not accept any Read/Write I/O operations.
P-VOL: Read and write.
V-VOL: Does not accept I/O operations (Read/Write).

Paired
Description: Paired is a pseudo status that exists to provide interchangeability with the ShadowImage system. The actual state is the same as Split. In this status, host access performance to the P-VOL is lowered.
P-VOL: Read and write.
V-VOL: Does not accept I/O operations (Read/Write).

Reverse Synchronizing
Description: The backup data retained in the V-VOL is being restored to the P-VOL. In this status, Read/Write I/O operations are accepted for the P-VOL as before (as in the Split status). The V-VOL does not accept Read/Write I/O operations, and the SnapShot instruction cannot be executed. The pair status returns to Paired after the restoration is completed. When a failure occurs, or the pair is split, during the restoration, the status of the V-VOLs correlated with the P-VOL becomes Failure.
P-VOL: Read and write.
V-VOL: Does not accept I/O operations (Read/Write).

Split
Description: The P-VOL data at the time of the SnapShot instruction is retained in the V-VOL. When the P-VOL data changes, the P-VOL data at the time of the SnapShot instruction is retained as the V-VOL data. The P-VOL and V-VOL in the Split status accept Read/Write I/O operations; however, the V-VOL does not accept any Read/Write instruction while the P-VOL is being restored. In this status, host access performance to the P-VOL is lowered.
P-VOL: Read and write.
V-VOL: Read and write (a Read/Write instruction is not accepted while the P-VOL is being restored).

Threshold Over
Description: The used rate of the data pool has reached the data pool threshold. A pair in the Threshold Over status otherwise operates as in the Split status; the status is displayed as Threshold Over when the pair status is referenced.
P-VOL: Read and write.
V-VOL: Read and write (a Read/Write instruction is not accepted while the P-VOL is being restored).

Failure
Description: The P-VOL data at the time of the SnapShot instruction cannot be retained in the V-VOL because of a failure in the AMS array. In this status, Read/Write I/O operations to the P-VOL are accepted as before (as in the Split status); however, when the failure occurred during a restoration, the P-VOL does not accept any Read/Write instruction. The V-VOL data has been invalidated at this point. To resume the pair, execute the SnapShot instruction again after releasing the pair once. The data of the V-VOL then created is not the former, invalidated version, but the P-VOL data at the time of the new SnapShot instruction.
P-VOL: Read and write (the P-VOL does not accept a Read/Write instruction when the pair status became Failure due to a failure that occurred during restoration).
V-VOL: Does not accept I/O operations (Read/Write).

Note: The pair statuses are described based on the display of Navigator 2. Refer to Table 8.1 for the CCI display of each pair status.
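To check the status from the host side, CCI's pairdisplay can be used; a sketch with a hypothetical group name (the CCI status names, such as SMPL, PAIR, and PSUS, differ from the Navigator 2 names, as Table 8.1 details):

    export HORCC_MRCF=1
    # Show every pair in the group with its status and copy progress
    pairdisplay -g snapgrp -fc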


2.5 Cascade Connection of SnapShot with TrueCopy

SnapShot volumes can be cascaded with TrueCopy volumes as shown in the following figure. Because a cascade of SnapShot with TrueCopy lowers performance, use it only when necessary. Note that SnapShot cannot be cascaded with ShadowImage.

Figure 2.9 Cascade Connection of SnapShot with TrueCopy

[Figure 2.9: two cascade patterns. Cascade with a P-VOL of SnapShot: on the local array, the host reads/writes an LU that is both a SnapShot P-VOL (paired with V-VOLs) and a TrueCopy P-VOL; the TrueCopy S-VOL on the remote array is itself a SnapShot P-VOL paired with V-VOLs. Cascade with a V-VOL of SnapShot: on the local array, a SnapShot V-VOL serves as the TrueCopy P-VOL, whose S-VOL resides on the remote array]


2.5.1 Cascade Restrictions with P-VOL of SnapShot

When a restore using SnapShot is executed, the TrueCopy pair must be in the Split status. If a restore using SnapShot is executed while the TrueCopy pair is in the Synchronizing or Paired status, the equality of the data in the cascaded P-VOL LUs on the local side and the remote side cannot be assured.
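A sketch of that ordering with CCI (the group names are hypothetical; the TrueCopy commands run without HORCC_MRCF, while the SnapShot commands run with it):

    # 1. Split the TrueCopy pair so the SnapShot restore does not run under Paired/Synchronizing
    pairsplit -g tcgrp
    pairevtwait -g tcgrp -s psus -t 300
    # 2. Restore the SnapShot V-VOL to the shared P-VOL (MRCF mode selects local replication)
    HORCC_MRCF=1 pairresync -g snapgrp -restore
    HORCC_MRCF=1 pairevtwait -g snapgrp -s pair -t 600
    # 3. Resynchronize TrueCopy to propagate the restored data to the remote side
    pairresync -g tcgrp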

LU Shared with P-VOL on SnapShot and P-VOL on TrueCopy

Table 2.4 shows whether a read/write from/to a P-VOL of SnapShot on the local side is possible in the case where a P-VOL of SnapShot and a P-VOL of TrueCopy are the same LU.

Table 2.4 A Read/Write Instruction to a P-VOL of SnapShot on the Local Side (TrueCopy)

(Rows: SnapShot P-VOL status: Paired, Synchronizing, Split, Failure. Columns: TrueCopy P-VOL status: Paired, Synchronizing (Restore), Split, Failure, Failure (Restore).)

[Matrix of Read/Write availability for each combination of the above statuses not reproduced]

○: A case possible. ×: A case impossible. Δ: A case where a pair operation causes an error (a case that can occur as a result of a change of the pair status to Failure).
R/W: Read/Write by a host is possible. R: Read by a host is possible but Write is impossible. W: Write by a host is possible but Read is impossible. Struck-through R/W: Read/Write by a host is impossible.

Note: Failure in this table excludes a condition in which access of an LU is not possible (for example, LU blockage).

One LU used for P-VOL on SnapShot and S-VOL on TrueCopy

Table 2.5 shows whether a read/write from/to a P-VOL of SnapShot on the remote side is possible in the case where a P-VOL of SnapShot and an S-VOL of TrueCopy are the same LU.


Table 2.5 A Read/Write Instruction to a P-VOL of SnapShot on the Remote Side (TrueCopy)

(Rows: SnapShot P-VOL status: Paired, Synchronizing, Split, Failure. Columns: TrueCopy S-VOL status: Paired, Synchronizing (Restore), Split, Failure, Failure (Restore).)

[Matrix of Read/Write availability for each combination of the above statuses not reproduced]

○: A case possible. ×: A case impossible. Δ: A case where a pair operation causes an error (a case that can occur as a result of a change of the pair status to Failure).
R/W: Read/Write by a host is possible. R: Read by a host is possible but Write is impossible. W: Write by a host is possible but Read is impossible. Struck-through R/W: Read/Write by a host is impossible.

Note: Failure in this table excludes a condition in which access of an LU is not possible (for example, LU blockage).

Number of SnapShot V-VOLs

V-VOLs of up to 32 generations can be created even when the P-VOL of SnapShot is cascaded with the P-VOL or S-VOL of TrueCopy, in the same way as when no cascade connection is made.

2.5.2 Cascade Restrictions with V-VOL of SnapShot

Transition of statuses of TrueCopy and SnapShot pairs

A cascade of a TrueCopy LU with a V-VOL of SnapShot is supported only when the V-VOL of SnapShot and the P-VOL of TrueCopy are the same LU. In addition, operations on the SnapShot and TrueCopy pairs are restricted depending on the statuses of the pairs.

When cascading TrueCopy volumes with a V-VOL of SnapShot, create the SnapShot pair first. If a TrueCopy pair was created earlier, split the TrueCopy pair once and then create the SnapShot pair.

When changing the status of a SnapShot pair, the status of the TrueCopy pair must be Split or Failure. When changing the status of a TrueCopy pair, the status of the SnapShot pair must be Split.

Table 2.6 shows whether a read/write from/to a V-VOL of SnapShot on the local side is possible in the case where a V-VOL of SnapShot and a P-VOL of TrueCopy are the same LU.


Table 2.6 A Read/Write Instruction to a V-VOL of SnapShot on the Local Side (TrueCopy)

Columns: SnapShot V-VOL status: Paired | Synchronizing (Restore) | Split | Failure | Failure (Restore)
Rows: TrueCopy P-VOL status (unlabeled lines are sub-rows whose labels were lost in extraction)

Paired:        × | × | R/W | × | ×
Synchronizing: × | × | R/W | × | ×
Split:         R/W | R/W | R/W | R/W | Δ R/W | Δ R/W
               R | R/W | R/W | R | Δ R/W | Δ R/W
Failure:       R/W | R/W | R/W | R/W | Δ R/W | Δ R/W
               R | R/W | R/W | R | Δ R/W | Δ R/W
               R/W | R/W | R/W | R/W | Δ R/W | Δ R/W

Legend and note: as for Table 2.4.


2.5.3 Restricted Configurations for the Cascade of TrueCopy with SnapShot

The following shows examples of configurations in which restrictions are placed on the cascade of TrueCopy with SnapShot.

Figure 2.10 Restrictions Configuration on the Cascade of TrueCopy with SnapShot

(Figure: three example local-array/remote-array configurations in which SnapShot P-VOLs and V-VOLs cascade with TrueCopy P-VOLs and S-VOLs.)

2.5.4 Cascade Restrictions with Data Pool of SnapShot

Neither a TrueCopy/TCE pair nor a ShadowImage pair can be created using a data pool.


2.6 Cascade Connection of SnapShot with TCE

Volumes of TCE can be cascaded with a SnapShot P-VOL as shown in the following figure. Note that cascading SnapShot with TCE lowers performance.

Figure 2.11 Cascade Connection of SnapShot with TCE

(Figure: two panels, a cascade with a P-VOL of SnapShot and a cascade with a V-VOL of SnapShot; a host reads from and writes to the local array, and the TCE pair links the P-VOL and S-VOL on the local and remote arrays.)

A TCE pair can be cascaded only with a SnapShot P-VOL, and the following restrictions are placed on the cascade connection.

A SnapShot pair cascaded with the TCE P-VOL can be restored only when the status of the TCE pair is Simplex, Split, or Pool Full.

When restoring a SnapShot pair cascaded with the TCE S-VOL, the status of the TCE pair must be made Simplex or Split. Restoration is also possible in the Takeover status, but not in the Busy status, in which the S-VOL is being restored using the data pool data.


While the TCE S-VOL is in the Busy status, in which it is being restored using the data pool data, Read/Write instructions cannot be issued to the SnapShot V-VOL cascaded with the TCE S-VOL.

At installation, SnapShot requires a restart of the array in order to reserve the resource for data pool management in the cache memory. This resource is shared with the data pool management resource of TCE; therefore, when using SnapShot and TCE together, restart the array only once, when whichever function is installed first.

Up to 32 generations of V-VOLs can be created even when the SnapShot P-VOL is cascaded with the P-VOL or S-VOL of TCE, just as when no cascade connection is made.

2.6.1 Cascade Restrictions with P-VOL of TCE

LU Shared with P-VOL on SnapShot and P-VOL on TCE

Table 2.7 shows whether a host can read from or write to a SnapShot P-VOL on the local side when the SnapShot P-VOL and a TCE P-VOL are the same LU.

Table 2.7 A Read/Write Instruction to a P-VOL of SnapShot on the Local Side (TCE)

Columns: SnapShot P-VOL status: Paired | Synchronizing (Restore) | Split | Threshold over | Failure | Failure (Restore)
Rows: TCE P-VOL status

Paired:        R/W | × | R/W | R/W | R/W | ×
Synchronizing: R/W | × | R/W | R/W | R/W | ×
Split:         R/W | R/W | R/W | R/W | R/W | Δ R/W
Pool Full:     R/W | R/W | R/W | R/W | R/W | Δ R/W
Failure:       R/W | Δ R/W | R/W | R/W | Δ R/W | Δ R/W

Legend and note: as for Table 2.4.


2.6.2 Cascade Restrictions with S-VOL of TCE

One LU used for P-VOL on SnapShot and S-VOL on TCE

Table 2.8 shows whether a host can read from or write to a SnapShot P-VOL on the remote side when the SnapShot P-VOL and a TCE S-VOL are the same LU.

Table 2.8 A Read/Write Instruction to a P-VOL of SnapShot on the Remote Side (TCE)

Columns: SnapShot P-VOL status: Paired | Synchronizing (Restore) | Split | Threshold over | Failure | Failure (Restore)
Rows: TCE S-VOL status

Paired:          R | × | R | R | R | ×
Synchronizing:   R | × | R | R | R | ×
Split (RW mode): R/W | R/W | R/W | R/W | R/W | Δ R/W
Split (R mode):  R | × | R | R | R | ×
Inconsistent:    Δ R/W | × | Δ R/W | Δ R/W | Δ R/W | ×
Takeover:        R/W | R/W | R/W | R/W | R/W | Δ R/W
Busy:            R/W | × | R/W | R/W | R/W | ×
Pool Full:       R | × | R | R | Δ R | ×

Legend and note: as for Table 2.4.


Chapter 3 SnapShot Requirements

This chapter describes SnapShot operational requirements and provides an overview of the SnapShot management software. This chapter includes the following sections:

System Requirements (see section 3.1)

Management Software (see section 3.2)

Supported Capacity (see section 3.3)


3.1 System Requirements

3.1.1 SnapShot Requirements

Table 3.1 shows the environments and requirements of SnapShot.

Table 3.1 Environments and Requirements of SnapShot

Environments:
- Firmware: version 0832/B or later for an AMS2100 or AMS2300 array whose H/W Rev. is 0100; version 0840/A or later for an AMS2500 array whose H/W Rev. is 0100; version 0890/A or later for an AMS2100/AMS2300/AMS2500 whose H/W Rev. is 0200.
- Navigator 2 (on the management PC): version 3.21 or later for an AMS2100 or AMS2300 array whose H/W Rev. is 0100; version 4.00 or later for an AMS2500 array whose H/W Rev. is 0100; version 9.00 or later for an AMS2100/AMS2300/AMS2500 whose H/W Rev. is 0200.
- CCI: version 01-21-03/06 or later on the host, required only when CCI is used to operate SnapShot.
- License key for SnapShot.

Requirements:
- Number of controllers: 2 (dual configuration).
- Command devices: max. 128. A command device is required only when CCI is used to operate SnapShot; the command device LU size must be 33 MB or more.
- Differential Management LUs: max. 2. The Differential Management LU size must be 10 GB or more; it is recommended to set two Differential Management LUs, created in different RAID groups.
- Data pool: max. 64. A data pool can hold the differential data of two or more primary volumes and two or more V-VOLs, shared per controller.
- Size of LU: the P-VOL size must equal the V-VOL LU size.


Note: The H/W Rev. is displayed when an individual array is selected from the Arrays list in Navigator 2 version 9.00 or later.


3.2 Management Software

3.2.1 Navigator 2

Navigator 2 displays detailed SnapShot information and is used for configuring the SnapShot environment. Navigator 2 communicates directly with the AMS arrays via a local area network (LAN) and supports both GUI and CLI user interfaces for configuration, monitoring, maintenance, and system reduction.

SnapShot configuration tasks include setting up data pools and logical units and enabling or releasing a command device. The Navigator 2 interfaces display important failure information for AMS arrays and maintenance information, including the amount of differential data, the time difference between the P-VOL and the S-VOL, and the used capacity of a data pool. Navigator 2 can also be used to increase the data pool capacity by adding more LUs to the data pool.

In the event of a system failure, Navigator 2 simplifies and expedites recovery procedures. System Reduction tasks such as deleting command devices and data pools can also be performed using Navigator 2.

3.2.2 Command Control Interface

CCI is used to display SnapShot volume information, create and manage SnapShot pairs, and issue commands for replication operations. CCI resides on the UNIX®/Windows® management host and interfaces with the AMS arrays through dedicated logical volumes. CCI commands can be issued from the UNIX®/Windows® command line or using a script file.


3.3 Supported Capacity

The SnapShot function restricts the P-VOL/data pool capacity. The supported maximum capacity varies depending on the ratio of P-VOL capacity to data pool capacity and on the cache memory. When other copy functions are used together with SnapShot, the maximum supported P-VOL capacity may be restricted further. The supported P-VOL capacity therefore needs to meet the following two conditions.

Must be less than or equal to the maximum supported capacity calculated by the capacity ratio with data pool (see section 3.3.1)

Must be less than or equal to the maximum supported capacity at the time of the combined use with other copy system functions (see section 3.3.2)

Furthermore, because the data pool used by SnapShot is shared with TCE, construct a system in which sufficient data pool capacity is secured when the SnapShot function is used together with TCE.

3.3.1 Maximum Supported Capacity of P-VOL and Data Pool for Each Cache Memory Capacity

Table 3.2 to Table 3.4 show the maximum supported capacities of the P-VOL and the data pool for each cache memory capacity, and the formula for calculating the previous capacities.

Table 3.2 Formula for Calculating the Maximum Supported Capacity for the P-VOL/Data Pool (AMS2100)

The constraint on the capacity spared for the differential data (shared by SnapShot and TCE) is: (total SnapShot P-VOL and TCE P-VOL (S-VOL) capacity) ÷ 5 + total data pool capacity < limit. The limit for each installed cache memory capacity is:

1 GB/CTL: not supported
2 GB/CTL: 1.4 TB
4 GB/CTL: 6.2 TB

Table 3.3 Formula for Calculating the Maximum Supported Capacity for the P-VOL/Data Pool (AMS2300)

Same formula; the limit for each installed cache memory capacity is:

1 GB/CTL: not supported
2 GB/CTL: 1.4 TB
4 GB/CTL: 6.2 TB
8 GB/CTL: 12.0 TB

Table 3.4 Formula for Calculating the Maximum Supported Capacity for the P-VOL/Data Pool (AMS2500)

Same formula; the limit for each installed cache memory capacity is:

2 GB/CTL: 1.4 TB
4 GB/CTL: 4.7 TB
6 GB/CTL: 9.4 TB
8 GB/CTL: 12.0 TB
10 GB/CTL: 15.0 TB
12 GB/CTL: 18.0 TB
16 GB/CTL: 24.0 TB
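The following Python sketch is illustrative only and not part of the product: it checks a planned configuration against the constraint above, taking the limit for the installed cache memory from Table 3.2 to Table 3.4 (the function and parameter names are ours).

    def within_capacity_limit(total_pvol_tb, total_pool_tb, limit_tb):
        # Formula from Tables 3.2-3.4: (total SnapShot P-VOL and
        # TCE P-VOL/S-VOL capacity) / 5 + total data pool capacity
        # must stay below the limit for the installed cache memory.
        return total_pvol_tb / 5 + total_pool_tb < limit_tb

    # Example: AMS2100 with 2 GB/CTL cache, limit 1.4 TB (Table 3.2)
    print(within_capacity_limit(1.0, 0.5, 1.4))   # True  (0.2 + 0.5 = 0.7 TB)
    print(within_capacity_limit(2.0, 1.2, 1.4))   # False (0.4 + 1.2 = 1.6 TB)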

Table 3.5 to Table 3.16 show the maximum supported capacities per capacity ratio, calculated from the formulas in Table 3.2 to Table 3.4.

Table 3.5 Supported Capacity of the P-VOL/Data Pool (Cache Memory 2 GB/CTL: AMS2100)

Ratio (total P-VOL capacity : total data pool capacity) | Supported total P-VOL capacity (TB) | Supported total data pool capacity (TB)
1:0.5 | 2.0 | 1.0
1:1   | 1.1 | 1.1
1:3   | 0.4 | 1.2

Table 3.6 Supported Capacity of the P-VOL/Data Pool (Cache Memory 4 GB/CTL: AMS2100)

(Columns as in Table 3.5.)
1:0.5 | 8.8 | 4.4
1:1   | 5.1 | 5.1
1:3   | 1.9 | 5.7

Table 3.7 Supported Capacity of the P-VOL/Data Pool (Cache Memory 2 GB/CTL: AMS2300)

(Columns as in Table 3.5.)
1:0.5 | 2.0 | 1.0
1:1   | 1.1 | 1.1
1:3   | 0.4 | 1.2

Table 3.8 Supported Capacity of the P-VOL/Data Pool (Cache Memory 4 GB/CTL: AMS2300)

(Columns as in Table 3.5.)
1:0.5 | 8.8 | 4.4
1:1   | 5.1 | 5.1
1:3   | 1.9 | 5.7

Table 3.9 Supported Capacity of the P-VOL/Data Pool (Cache Memory 8 GB/CTL: AMS2300)

(Columns as in Table 3.5.)
1:0.5 | 17.1 | 8.5
1:1   | 10.0 | 10.0
1:3   | 3.7  | 11.1

Table 3.10 Supported Capacity of the P-VOL/Data Pool (Cache Memory 2 GB/CTL: AMS2500)

(Columns as in Table 3.5.)
1:0.5 | 2.0 | 1.0
1:1   | 1.1 | 1.1
1:3   | 0.4 | 1.2

Table 3.11 Supported Capacity of the P-VOL/Data Pool (Cache Memory 4 GB/CTL: AMS2500)

(Columns as in Table 3.5.)
1:0.5 | 6.7 | 3.3
1:1   | 3.9 | 3.9
1:3   | 1.4 | 4.2

Table 3.12 Supported Capacity of the P-VOL/Data Pool (Cache Memory 6 GB/CTL: AMS2500)

(Columns as in Table 3.5.)
1:0.5 | 13.4 | 6.7
1:1   | 7.8  | 7.8
1:3   | 2.9  | 8.7

Table 3.13 Supported Capacity of the P-VOL/Data Pool (Cache Memory 8 GB/CTL: AMS2500)

(Columns as in Table 3.5.)
1:0.5 | 17.1 | 8.5
1:1   | 10.0 | 10.0
1:3   | 3.7  | 11.1

Table 3.14 Supported Capacity of the P-VOL/Data Pool (Cache Memory 10 GB/CTL: AMS2500)

(Columns as in Table 3.5.)
1:0.5 | 21.4 | 10.7
1:1   | 12.5 | 12.5
1:3   | 4.6  | 13.8

Table 3.15 Supported Capacity of the P-VOL/Data Pool (Cache Memory 12 GB/CTL: AMS2500)

(Columns as in Table 3.5.)
1:0.5 | 25.7 | 12.8
1:1   | 15.0 | 15.0
1:3   | 5.6  | 16.8

Table 3.16 Supported Capacity of the P-VOL/Data Pool (Cache Memory 16 GB/CTL: AMS2500)

(Columns as in Table 3.5.)
1:0.5 | 34.2 | 17.1
1:1   | 20.0 | 20.0
1:3   | 7.5  | 22.5

Notes:

The capacity of each P-VOL is managed in units of 15.75 GB. When the P-VOL capacity is 17 GB, the P-VOL is regarded as using a capacity of 31.5 GB. When there are two P-VOLs of 17 GB each, they use a total capacity of 63 GB (= 31.5 GB × 2), though the actual capacity is 34 GB (= 17 GB × 2).

The capacity of each LU registered to a data pool is managed in units of 3.2 GB. When the LU capacity is 5 GB, the LU is regarded as using a capacity of 6.4 GB. When two LUs of 5 GB each are registered to a data pool, they use a total capacity of 12.8 GB (= 6.4 GB × 2), though the actual capacity is 10 GB (= 5 GB × 2).
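As a small illustration of this rounding (a sketch only; nothing here beyond the unit sizes stated above comes from the product), the managed capacity can be computed in Python as follows:

    import math

    PVOL_UNIT_GB = 15.75   # P-VOL capacity is managed in 15.75 GB units
    POOL_UNIT_GB = 3.2     # data pool LU capacity is managed in 3.2 GB units

    def managed_capacity_gb(actual_gb, unit_gb):
        # Round the actual LU capacity up to the next management unit.
        return math.ceil(actual_gb / unit_gb) * unit_gb

    # Two 17 GB P-VOLs count as 31.5 GB each, 63 GB in total
    print(managed_capacity_gb(17, PVOL_UNIT_GB) * 2)   # 63.0
    # Two 5 GB data pool LUs count as 6.4 GB each, 12.8 GB in total
    print(managed_capacity_gb(5, POOL_UNIT_GB) * 2)    # 12.8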

For the supported P-VOL/data pool capacities for each cache memory size, refer to the following graphs.

(Graphs: total data pool capacity (TB) plotted against total SnapShot P-VOL and TCE P-VOL (S-VOL) capacity (TB) for each cache memory size: AMS2100 (2 GB/CTL and 4 GB/CTL), AMS2300 (2, 4, and 8 GB/CTL), and AMS2500 (2, 4, 6, 8, 10, 12, and 16 GB/CTL).)


3.3.2 Maximum Supported Capacity of Concurrent Use of Other Copy Functions

The maximum P-VOL capacity supported by SnapShot can be calculated from the following formula. The single maximum capacity of SnapShot is shown in Table 3.17.

Maximum supported SnapShot P-VOL capacity (TB) = maximum SnapShot single capacity - (total ShadowImage S-VOL capacity ÷ 17) - (total TrueCopy P-VOL and S-VOL capacity ÷ 17) - (total TCE P-VOL and S-VOL capacity × 3)

Table 3.17 Single Maximum Capacity of SnapShot (TB)

AMS2100: 1 GB/CTL: not supported | 2 GB/CTL: 46 | 4 GB/CTL: 56
AMS2300: 1 GB/CTL: not supported | 2 GB/CTL: 42 | 4 GB/CTL: 116 | 8 GB/CTL: 233
AMS2500: 2 GB/CTL: 30 | 4 GB/CTL: 116 | 6 GB/CTL: 163 | 8 GB/CTL: 210 | 10 GB/CTL: 280 | 12 GB/CTL: 350 | 16 GB/CTL: 420
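As an illustrative Python sketch of this calculation (the divisors and multiplier come from the formula above, the single maximum capacities from Table 3.17; the function name is ours):

    def snapshot_max_pvol_tb(single_max_tb, si_svol_tb=0.0,
                             tc_total_tb=0.0, tce_total_tb=0.0):
        # single_max_tb: single maximum capacity from Table 3.17
        # si_svol_tb:    total ShadowImage S-VOL capacity (TB)
        # tc_total_tb:   total TrueCopy P-VOL and S-VOL capacity (TB)
        # tce_total_tb:  total TCE P-VOL and S-VOL capacity (TB)
        return (single_max_tb
                - si_svol_tb / 17
                - tc_total_tb / 17
                - tce_total_tb * 3)

    # Example: AMS2300 with 4 GB/CTL cache (single maximum 116 TB)
    print(snapshot_max_pvol_tb(116, si_svol_tb=17, tc_total_tb=34, tce_total_tb=10))
    # 116 - 1 - 2 - 30 = 83.0 TB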


Chapter 4 Setting Up Replication System

This chapter includes the following:

Recommendations (see section 4.1)

Determining Data Pool Capacity (see section 4.2)

Cautions and Restrictions (see section 4.3)

Installing and Uninstalling SnapShot (see section 4.4)

Operations for SnapShot Configuration (see section 4.5)


4.1 Recommendations

4.1.1 Pair Assignment

Do not assign a frequently updated LU to a pair.

When the pair status is Split, old data is copied to a data pool volume whenever the primary volume is written. Because this increases the load on the processor in the controller, write performance becomes limited. The heavier the write load (a large number of write operations, writes with a large block size, frequent write I/O instructions, continuous writing), the greater the effect. Therefore, be strict in selecting the LUs to which SnapShot is applied. When applying SnapShot to an LU bearing a heavy write load, consider making the loads on the other LUs lighter.

Use a small number of volumes within the same RAID group.

When volumes are assigned to the same RAID group and used as primary volumes, there may be situations where the host I/O for one of the volumes causes restriction on host I/O performance of the other volume(s) due to drive contention. Therefore, it is recommended that you assign few (one or two) primary volumes to the same RAID group (refer to section 4.1.2). When creating pairs within the same RAID group, standardize the controllers that control LUs in the same RAID group.

Make an exclusive RAID group for a data pool volume.

When another volume is assigned to a RAID group to which a data pool volume has been assigned, the load on the drives increases and their performance is restricted, because multiple primary volumes share the data pool volume. Therefore, use the RAID group to which a data pool volume is assigned for the data pool volume only. There can be multiple data pool volumes in an AMS array; use a different RAID group for each data pool (refer to section 4.1.2).

For SnapShot, use SAS drives, SAS7.2K drives, or SSD drives.

When a P-VOL and data pool are located in a RAID group made up of SATA drives, host I/O performance is reduced because of the lower performance of the SATA drive. Therefore, assign the primary volume to a RAID group consisting of SAS drives, SAS7.2K drives, or SSD drives (refer to section 4.1.2).

Assign four or more data disks.

When there are not enough data disks in the RAID group, host performance and/or copying performance is reduced because read and write operations are restricted. When operating pairs with SnapShot, it is recommended that you use an LU consisting of four or more data disks.

4.1.2 Locating P-VOLs and Data Pools

Do not locate the P-VOL and the data pool within the same ECC group of the same RAID group because:

– A single drive failure causes a degenerated status in the P-VOL and data pool.

– Performance decreases because processes, such as access to the P-VOL and data copying to the data pool, are concentrated on a single disk drive.

(Figure: in the not-recommended layout, the P-VOL/data pool LUs for controller 0 and controller 1 share one RAID group; in the recommended layout, the P-VOLs and the data pools for each controller are placed in separate RAID groups.)


Notes on locating multiple LUs within the same drive column

(Figure: LUN0 to LUN5 located within the same drive column, serving as P-VOLs and data pools for controllers 0 and 1.)

If multiple LUs are set within the same drives and their pair statuses differ, it is difficult to estimate performance when designing the system operational settings; for example, when LU0 and LU2 are both P-VOLs within the same group in the same drives (with their V-VOLs located in a different drive group) and LU0 is in the Reverse Synchronizing status while LU2 is in the Split status.

Pair status differences when setting multiple pairs

If you have set a single LU per drive group, keep the statuses of the pairs aligned (such as all Simplex or all Split) when setting multiple SnapShot pairs. If the SnapShot pair statuses differ, it becomes difficult to estimate performance when designing the system operational settings.

Choosing the drive types of a RAID group in which a P-VOL/data pool will be located

For optimal performance, a P-VOL should be located in a RAID group which contains SAS drives, SAS7.2K drives, or SSD drives. When a P-VOL or data pool is located in a RAID group made up of SATA drives, the host I/O performance is lessened due to the decreased performance of the SATA drive. You should assign a primary volume to a RAID group consisting of SAS drives, SAS7.2K drives, or SSD drives.

Prelim

inary

Page 57: Copy on WriteSnapShotUsersGuide

45

4.1.3 P-VOLs and Data Pools in a RAID Configuration

You must use a RAID level with redundancy for both the P-VOL and the data pool, as RAID 0 is not supported.

The RAID level and/or the number of drives (N of ND+1P or ND+NP) of a P-VOL need not be identical to those of the data pool. However, to improve performance, make the RAID levels and the numbers of drives identical.

Table 4.1 P-VOL and Data Pool RAID Configuration

P-VOL | Data Pool | Amount of User Data | Total Amount of SnapShot | Share of User Data
RAID 1+0 (N = 1 to 8) | RAID 1+0 (N = 1 to 8) | 1 | 4 | 1/4
RAID 1+0 (N = 1 to 8) | RAID 5 (see Note 1) (N = 4) | 1 | 2 + 1.25 = 3.25 | 1/3.25
RAID 5 (see Note 1) (N = 4) | RAID 1+0 (N = 1 to 8) | 1 | 1.25 + 2 = 3.25 | 1/3.25
RAID 5 (see Note 1) (N = 4) | RAID 5 (see Note 1) (N = 4) | 1 | 1.25 + 1.25 = 2.5 | 1/2.5
RAID 5 (see Note 1) (N = 8) | RAID 5 (see Note 1) (N = 8) | 1 | 1.125 + 1.125 = 2.25 | 1/2.25
RAID 6 (see Note 2) (N = 4) | RAID 6 (see Note 2) (N = 4) | 1 | 1.5 + 1.5 = 3 | 1/3
RAID 6 (see Note 2) (N = 8) | RAID 6 (see Note 2) (N = 8) | 1 | 1.25 + 1.25 = 2.5 | 1/2.5
RAID 6 (see Note 2) (N = 4) | RAID 5 (see Note 1) (N = 4) | 1 | 1.5 + 1.25 = 2.75 | 1/2.75

Note 1: Capacity factor = (1 + 1/N), where N = number of data drives in the RAID group.

Note 2: Capacity factor = (1 + 2/N), where N = number of data drives in the RAID group.
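The capacity factors in the notes combine into the "share of user data" column. The following Python sketch reproduces the table's arithmetic (the factor of 2 for RAID 1+0 follows from mirroring; the function names are illustrative, not a product API):

    def capacity_factor(raid_level, n_data):
        # Physical capacity consumed per unit of user data:
        # RAID 1+0 mirrors the data (factor 2); RAID 5 adds one
        # parity (1 + 1/N); RAID 6 adds two parities (1 + 2/N).
        return {"RAID1+0": 2.0,
                "RAID5": 1.0 + 1.0 / n_data,
                "RAID6": 1.0 + 2.0 / n_data}[raid_level]

    def snapshot_share_of_user_data(pvol_raid, pvol_n, pool_raid, pool_n):
        # Share of user data = 1 / (P-VOL factor + data pool factor).
        return 1.0 / (capacity_factor(pvol_raid, pvol_n)
                      + capacity_factor(pool_raid, pool_n))

    # RAID 5 (4D+1P) for both the P-VOL and the data pool:
    print(snapshot_share_of_user_data("RAID5", 4, "RAID5", 4))   # 0.4 = 1/2.5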

RAID 5 (4D+1P)/RAID 5 (4D+1P) is the recommended configuration because 4D+1P offers good performance and a balanced ratio of redundancy to user data for RAID 5.

4.1.4 Command Devices

When two or more command devices are set within one AMS array, assign them to different RAID groups. If they are assigned to the same RAID group, a single malfunction, such as a drive failure, can make both command devices unavailable.


4.1.5 Differential Management LUs

When two Differential Management LUs are set within one AMS array, assign them to different RAID groups. If they are assigned to the same RAID group, a single failure, such as a drive failure, can make both Differential Management LUs unavailable.

4.1.6 LU Ownership of P-VOLs and Data Pools

The load balancing function is not applied to the LUs specified as a SnapShot pair.

The ownership of the LUs of a SnapShot pair follows the ownership of the LU specified as the data pool. For example, if a SnapShot pair is created by specifying an LU owned by controller 0 as the P-VOL and an LU owned by controller 1 as the data pool, the ownership of the P-VOL LU is changed to controller 1.

If two or more SnapShot pairs share the same data pool, the ownership of all the pairs is biased toward the same controller and the load is concentrated. To distribute the load, create two or more data pools with different ownerships, and spread the pairs evenly across the data pools when creating SnapShot pairs.

(Figure: when a pair is created with a P-VOL owned by controller 0 and a data pool owned by controller 1, the P-VOL ownership changes to controller 1.)

Furthermore, when adding LUs to increase the data pool capacity, if the LU already allocated to the data pool and the LU to be added are owned by different controllers, the ownership of the added LU is changed to that of the already allocated LU.



Prelim

inary

Page 60: Copy on WriteSnapShotUsersGuide

48

4.2 Determining Data Pool Capacity

The factors that determine the data pool capacity include:

– A total capacity of P-VOLs

– A number of generations (a number of V-VOLs)

– An interval of the SnapShot instructions (a period for holding the V-VOL)

– An amount of data updated during the interval and the spare capacity (safety rate)

The method for calculating the data pool capacity is:

Data pool capacity = P-VOL capacity × (rate of updated data × safety rate) × number of generations

When restoring backup data stored in a tape device, add more than the P-VOL capacity (1.5 times the P-VOL capacity or more is recommended) to the data pool capacity computed from the above formula. This provides sufficient free data pool capacity, larger than the P-VOL capacity.

Generally, the rate of updated data amount per day is approximately 10%.

If one V-VOL is created for a 1 TB P-VOL and a SnapShot instruction is issued once a day, the recommended data pool capacity is approximately 250 GB, with a safety rate of approximately 2.5, considering the variance in the amount of updated data due to locality of access and operations.

When five V-VOLs are made per 1 TB P-VOL and one SnapShot instruction is issued to one of the five V-VOLs each day (so each V-VOL holds data for a period of five days), the recommended data pool capacity is about 1.2 TB (five times the capacity for a single V-VOL).

The recommended data pool capacity when the capacity of one P-VOL is 1 TB is shown in the following table; for other sizes, multiply the value in the table by the P-VOL capacity in TB.

The values in the table are only guidelines, because the amount of data actually accumulated in the data pool varies depending on the application, the amount of processed data, the hours of operation, and so on. If a data pool has too small a capacity, it becomes full and all V-VOLs are placed in the Failure status. When introducing SnapShot, provide a data pool with sufficient capacity and verify the data pool capacity beforehand. Monitor the used capacity, because it changes depending on system operation.

Prelim

inary

Page 61: Copy on WriteSnapShotUsersGuide

49

Table 4.2 Recommended Data Pool Capacity (When the P-VOL Capacity is 1 TB)

Rows: interval of SnapShot instructions (see Note 1). Columns: V-VOL number (n).

Interval \ n:            1       | 2       | 3       | 4       | 5       | 6 to 14
From one to four hours:  0.10 TB | 0.20 TB | 0.30 TB | 0.40 TB | 0.50 TB | 0.80 TB (*1)
From four to eight hours: 0.15 TB | 0.30 TB | 0.45 TB | 0.60 TB | 0.75 TB | –
From eight to 12 hours:  0.20 TB | 0.40 TB | 0.60 TB | 0.80 TB | –       | –
From 12 to 24 hours:     0.25 TB | 0.50 TB | 0.75 TB | –       | –       | –

(*1): Applies to V-VOL numbers (n) of 6 to 8. "–": no recommended value, because the maximum supported capacity would be exceeded.

Note 1: An interval of SnapShot instructions means a time between SnapShot instructions issued to the designated P-VOL. When there is only one V-VOL, the interval of the SnapShot instructions is as long as the period for retaining the V-VOL. When there are two or more V-VOLs, the interval of the SnapShot instructions multiplied by the number of the V-VOLs is the period for retaining the one V-VOL.

(Figure: with three V-VOLs and SnapShot instructions issued every eight hours, at times 0, 8, and 16, each V-VOL is retained for 24 hours, that is, the instruction interval multiplied by the number of V-VOLs.)

Note 2: Construct a system in which the interval of SnapShot instructions is less than one day. When the interval is long, it becomes difficult, depending on the system environment, to estimate the amount of data that accumulates in the data pool.

When setting two or more pairs (P-VOLs) per data pool, determine the pool capacity by calculating the data pool capacity necessary for each pair and adding the calculated values together.


Example: three pairs sharing one data pool (consistent with the 0.25 ratio for a 12-to-24-hour interval in Table 4.2):

Pair 1: P-VOL of 500 GB, one generation; data pool capacity required: 125 GB
Pair 2: P-VOL of 100 GB, five generations; data pool capacity required: 125 GB
Pair 3: P-VOL of 50 GB, ten generations; data pool capacity required: 125 GB

Total data pool capacity required: 125 GB + 125 GB + 125 GB = 375 GB
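A hedged Python sketch of this sizing arithmetic (the per-generation ratios are the Table 4.2 values per 1 TB of P-VOL; the 24-hour ratio of 0.25 reproduces the example above; all names are illustrative):

    # Recommended data pool ratio per generation, per unit of P-VOL
    # capacity, by SnapShot instruction interval (Table 4.2)
    RATIO_BY_INTERVAL_H = {4: 0.10, 8: 0.15, 12: 0.20, 24: 0.25}

    def pool_capacity_gb(pvol_gb, generations, interval_h):
        # Data pool capacity = P-VOL capacity x ratio x generations
        return pvol_gb * RATIO_BY_INTERVAL_H[interval_h] * generations

    pairs = [(500, 1), (100, 5), (50, 10)]   # (P-VOL GB, generations)
    total = sum(pool_capacity_gb(gb, gens, 24) for gb, gens in pairs)
    print(total)   # 125.0 + 125.0 + 125.0 = 375.0 GB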

Up to 64 LUs can be set for a data pool, and the capacity of the data pool can be expanded by adding LU(s) online (while SnapShot pairs exist). Therefore, at the time of initial system configuration, it is recommended to prepare a spare LU to be used when the data pool capacity becomes insufficient. Note that all the V-VOLs using the data pool must be deleted before an LU assigned to the data pool can be returned to an ordinary LU.

When returning the backup data from a tape device to the V-VOL, a free data pool with a capacity larger than the P-VOL capacity is required. More than 1.5 times the P-VOL capacity is recommended.

Note: LUs on SAS drives, SAS7.2K drives, SSD drives, and SATA drives cannot coexist in the same data pool.


4.3 Cautions and Restrictions

This section describes the restrictions for configuration.

4.3.1 Specifying the P-VOL and V-VOL for Pair Operations

The number used to specify the P-VOL and V-VOL for pair operations is the LUN, not the H-LUN recognized by the hosts.

The following shows how to confirm the H-LUN, using Windows Server 2003 as an example.

1. Start Computer Management from the Control Panel of Windows Server 2003 and select Disk Administrator. The right side of the displayed window lists the disks recognized by Windows Server 2003.

2. Right-click the disk whose H-LUN you want to confirm and select Properties. The number displayed to the right of LUN in the dialog window is the H-LUN.

The following shows how to find the mapping between the H-LUN and the LUN.

For Fibre Channel interface:

1. Start Navigator 2.

2. Select the AMS array and click the Host Groups icon in the Groups tree.

3. Click the Host Group that the volume is mapped to.

4. Click the Logical Units tab.

The list of volumes mapped to the Host Group is displayed, and you can confirm the LUN mapped to each H-LUN.

For iSCSI interface:

1. Start Navigator 2.

2. Select the AMS array and click the iSCSI Targets icon in the Groups tree.

3. Click the iSCSI Target that the volume is mapped to.

4. Click the Logical Units tab.

The list of volumes mapped to the iSCSI Target is displayed, and you can confirm the LUN mapped to each H-LUN.


4.3.2 LU Mapping and SnapShot Configuration

When operating pairs using CCI, a P-VOL and V-VOL whose mapping information is not set for the port specified in the configuration definition file (that is, that are not recognizable from a host) cannot be paired. Use LUN Manager if you do not want them recognized by a host.

4.3.3 Cluster Software, Path Switching Software, and SnapShot Configuration

Do not make the V-VOL an object of cluster software or path-switching software.

4.3.4 MSCS and SnapShot Configuration

When setting the V-VOL to be recognized by the host, use the CCI mount command instead of using the Disk Administrator.

The same host cannot recognize both P-VOL and V-VOL. Have the host recognize the P-VOL and let another host recognize the V-VOL.

Do not place the MSCS Quorum Disk in CCI.

Shut down MSCS before executing the CCI sync command.

The command device cannot be shared between different hosts in the cluster. Assign an exclusive command device to each host.

4.3.5 AIX and SnapShot Configuration

The same host cannot recognize both P-VOL and V-VOL. Have the host recognize only P-VOL and let another host recognize V-VOL.

Multiple V-VOLs per P-VOL cannot be recognized from one host. Limit recognition from a host to only one V-VOL per P-VOL.

4.3.6 VxVM and SnapShot Configuration

If you set the P-VOL and the V-VOL to be recognized by the same host, VxVM will not operate properly. Set only the P-VOL to be recognized by the host and let another host recognize the V-VOL.

4.3.7 Windows 2000 and SnapShot Configuration

Multiple V-VOLs per P-VOL cannot be recognized from one host. Limit recognition from a host to only one V-VOL per P-VOL.


In order to make a consistent backup using storage-based replication such as SnapShot, you must have a way to flush the data residing in server memory to the array, so that the source volume of the replication has the complete data. When mounting a volume, use the CCI mount command even if the Navigator 2 GUI or CLI is used to operate the pairs; do not use the standard mountvol command included in Windows 2000. You can flush the data in server memory by unmounting the volume with the CCI umount command. (For more detail about the mount/umount commands, see the Hitachi Adaptable Modular Storage Command Control Interface (CCI) Reference Guide.)

4.3.8 Windows Server 2003/Windows Server 2008 and SnapShot Configuration

Multiple V-VOLs per P-VOL cannot be recognized from one host. Limit recognition from a host to only one V-VOL per P-VOL.

In order to make a consistent backup using storage-based replication such as SnapShot, you must have a way to flush the data residing in server memory to the array, so that the source volume of the replication has the complete data. You can flush the data in server memory by unmounting the volume with the CCI umount command; when using the CCI umount command for unmounting, use the CCI mount command for mounting. (For more detail about the mount/umount commands, see the Hitachi Adaptable Modular Storage Command Control Interface (CCI) Reference Guide.) On Windows Server 2003, using mountvol /P to flush the data in server memory when unmounting the volume is supported; understand the specification of the command and run sufficient tests before using it in your operation. On Windows Server 2008, use the CCI umount command to flush the data in server memory at unmount time, and do not use the standard Windows mountvol command. Refer to the CCI Reference Guide for details of the Windows Server 2008 restrictions on the mount/umount commands.

On Windows Server 2008, set only the P-VOL to be recognized by the host and let another host recognize the V-VOL. When you have created two or more V-VOLs for one P-VOL, do not let the same host recognize two or more of those V-VOLs (which share the P-VOL) at the same time.

When CCI is used to operate pairs and a command device is described in the configuration definition file, specify it as Volume{GUID}. (For more detail, see the CCI Reference Guide.)

When a path becomes detached, which can be caused by a controller detachment or an interface failure, and remains detached for longer than one minute, the command device may not be recognized when the path recovers. Execute Rescan Disks in Windows to recover. Restart CCI if Windows cannot access the command device even though CCI recognizes it.

Windows may write to an unmounted volume. If a pair is resynchronized while data for the V-VOL remains in server memory, a consistent backup cannot be collected. Therefore, execute the CCI sync command immediately before resynchronizing the pair for the unmounted V-VOL.


4.3.9 Linux and LVM Configuration

If you set the P-VOL and the V-VOL to be recognized by the same host, LVM will not operate properly. Set only the P-VOL to be recognized by the host and let another host recognize the V-VOL.

4.3.10 Tru64 UNIX and SnapShot Configuration

When rebooting the host, it may take some time before the host comes up if a V-VOL is in a status other than Split. Before rebooting the host, make sure the V-VOL is Split, or use LUN Manager to keep the host from recognizing V-VOLs that are not Split.

4.3.11 Windows Server 2008/Windows Server 2003/Windows 2000 and Dynamic Disk

In a Windows 2000/Windows Server 2003/Windows Server 2008 environment, you cannot use SnapShot pair volumes as dynamic disks. The reason for this restriction is that if you restart Windows or run the Rescan Disks command after creating or resynchronizing a SnapShot pair, there are cases where the V-VOL is displayed as Foreign in Disk Management and becomes inaccessible.

4.3.12 VMWare and SnapShot Configuration

When creating a backup of a virtual disk in the vmfs format using SnapShot, shut down the virtual machine that accesses the virtual disk, and then split the pair.

If one LU is shared by multiple virtual machines, all the virtual machines sharing the LU must be shut down when creating a backup. Therefore, sharing one LU among multiple virtual machines is not recommended in a configuration that creates backups using SnapShot.

VMWare ESX has a function to clone a virtual machine. Although the ESX clone function and SnapShot can be combined, caution is required regarding performance at execution time. For example, when the LU that is the ESX clone destination is the P-VOL of a SnapShot pair in the Split status, old data is written to the data pool for every write to the P-VOL, so the clone may take longer and may terminate abnormally in some cases. To avoid this, we recommend making the SnapShot pair status Simplex, and creating and splitting the pair after executing the ESX clone. The same applies to functions such as migrating a virtual machine, deploying from a template, and inflating a virtual disk.



4.3.13 Concurrent Use of Cache Partition Manager

When SnapShot is used together with Cache Partition Manager, see the section 2.3.2 Notes of the Hitachi Adaptable Modular Storage Cache Partition Manager User’s Guide.

4.3.14 Concurrent Use of Dynamic Provisioning

When the array firmware version is earlier than 0893/A, DP-VOLs created by Dynamic Provisioning cannot be set as a SnapShot P-VOL, and DP-VOLs cannot be added to the data pool used by SnapShot and TCE.

Depending on the installed cache memory, Dynamic Provisioning and SnapShot may not be unlocked at the same time. To unlock Dynamic Provisioning and SnapShot at the same time, add cache memory. For the supported cache memory capacities, refer to section 4.3.15.

The data pool used by SnapShot and TCE cannot be used as a DP pool of Dynamic Provisioning. Likewise, a DP pool used by Dynamic Provisioning cannot be used as a data pool of SnapShot or TCE.

When the array firmware version is 0893/A or later, DP-VOLs created by Dynamic Provisioning can be set as a SnapShot P-VOL or data pool. However, normal LUs and DP-VOLs cannot coexist in the same data pool.

The points to keep in mind when using SnapShot and Dynamic Provisioning together are described here. Refer to the Hitachi Adaptable Modular Storage Dynamic Provisioning User's Guide for detailed information about Dynamic Provisioning. Hereinafter, an LU created in a RAID group is called a normal LU, and an LU created in a DP pool created by Dynamic Provisioning is called a DP-VOL.

LU type that can be set for a P-VOL or a data pool of SnapShot

The DP-VOL created by Dynamic Provisioning can be used for a P-VOL or a data pool of SnapShot. Table 4.3 shows a combination of a DP-VOL and a normal LU that can be used for a P-VOL or a data pool of SnapShot.


Table 4.3 Combinations of a DP-VOL and a Normal LU

P-VOL | Data Pool | Contents
DP-VOL | DP-VOL | Available. When the pair status is Split, the data pool consumed capacity can be reduced compared with a normal LU.
DP-VOL | Normal LU | Available. The P-VOL consumed capacity can be reduced compared with a normal LU.
Normal LU | DP-VOL | Available. When the pair status is Split, the data pool consumed capacity can be reduced compared with a normal LU.

Assigning the controlled processor core of a P-VOL or a data pool which uses the DP-VOL

When the controlled processor core of the DP-VOL used for the P-VOL differs from that of the DP-VOL used for the data pool of SnapShot (as with normal LUs), the P-VOL's controlled processor core assignment is switched automatically to that of the data pool when the pair is created (in the case of the AMS2500).

DP pool designation of a P-VOL or a data pool which uses the DP-VOL

When using DP-VOLs created by Dynamic Provisioning for a P-VOL or a data pool of SnapShot, it is recommended for performance to use DP-VOLs located in separate DP pools for the P-VOL and the data pool.

Setting the capacity when placing the DP-VOL in the data pool

When the pair status is Split, old data is copied to the data pool while the P-VOL is written. When a DP-VOL created by Dynamic Provisioning is used as the SnapShot data pool, the consumed capacity of the DP-VOL in the data pool increases as the old data is stored. If DP-VOLs whose capacity is greater than or equal to the DP pool capacity are created and used for the data pool, this processing may deplete the DP pool capacity. When using DP-VOLs for the SnapShot data pool, it is recommended to set the capacity so that the over-provisioning ratio is 100% or less, so that the DP pool capacity is not depleted.

Furthermore, the threshold value of the SnapShot data pool and the threshold value of the DP pool differ. The use rate of the SnapShot data pool goes down when the SnapShot pairs using the data pool are resynchronized or deleted; however, because the used area is not released, the DP pool consumed capacity may have exceeded the Depletion Alert even when the SnapShot data pool use rate shows 10% or less. Check whether the actual use rate falls below the respective threshold values of both the SnapShot data pool and the DP pool. Note that, for a DP pool that contains a data pool, even deleting zero data has no reducing effect on the consumed capacity of the DP-VOLs allocated to the data pool.

Pair status at the time of DP pool capacity depletion

When the DP pool is depleted after operating a SnapShot pair that uses DP-VOLs created by Dynamic Provisioning, the pair status of the pair concerned may become Failure. Table 4.4 shows the pair statuses before and after DP pool capacity depletion. When the pair status becomes Failure because of DP pool capacity depletion, add capacity to the depleted DP pool and execute the pair operation again.


Table 4.4 Pair Statuses before and after DP Pool Capacity Depletion

Status before depletion | After depletion of the DP pool to which the P-VOL belongs | After depletion of the DP pool to which the data pool belongs
Simplex | Simplex | Simplex
Reverse Synchronizing | Reverse Synchronizing or Failure (Note) | Reverse Synchronizing or Failure (Note)
Paired | Paired | Paired
Split | Split or Failure (Note) | Split or Failure (Note)
Failure | Failure | Failure

Note: When a write is performed to the P-VOL or V-VOL to which the depleted DP pool belongs, the copy cannot be continued and the pair status becomes Failure.

DP pool status and availability of pair operation

When using the DP-VOL created by Dynamic Provisioning for a P-VOL or a data pool of the SnapShot pair, the pair operation may not be executed depending on the status of the DP pool to which the DP-VOL belongs. Table 4.5 shows the DP pool status and availability of the SnapShot pair operation. When the pair operation fails due to the DP pool status, correct the DP pool status and execute the pair operation again.

Table 4.5 DP Pool Statuses and Availability of SnapShot Pair Operations

DP pool statuses (columns): Normal | Capacity in Growth | Capacity Depletion | Regressed | Blocked | DP in Optimization
Pair operations (rows): Create pair; Create pair (split option); Split pair; Resync pair; Restore pair; Delete pair

In the source, each operation except Delete pair is marked × (impossible) for two of the six DP pool statuses, and Delete pair carries no × marks; the ○ (possible) marks and the column positions of the × marks were lost in extraction.

Note: When a DP pool is created or its capacity is added, formatting runs on the DP pool. If pair creation, pair resynchronization, or restoration is performed during the formatting, the usable capacity may be depleted. Because the formatting progress is displayed when the DP pool status is checked, confirm that sufficient usable capacity is secured according to the formatting progress before starting the operation.

Operation of the DP-VOL during SnapShot use

When DP-VOLs created by Dynamic Provisioning are used for a P-VOL or a data pool of SnapShot, capacity growing, capacity shrinking, and LU deletion cannot be executed for a DP-VOL in use. To execute such an operation, first delete the SnapShot pair that uses the DP-VOL, and then execute the operation again.

Operation of the DP pool during SnapShot use

When DP-VOLs created by Dynamic Provisioning are used for a P-VOL or a data pool of SnapShot, the DP pool to which a DP-VOL in use belongs cannot be deleted. To execute the operation, first delete the SnapShot pair whose DP-VOL belongs to the DP pool concerned, and then execute the operation again. Attribute editing and capacity addition of the DP pool can be executed as usual, regardless of the SnapShot pair.

Cascade connection

A cascade can be configured under the same conditions as with normal LUs (refer to sections 2.5 and 2.6). However, the firmware version of the array containing the DP-VOL must be 0893/A or later.


4.3.15 User Data Area of Cache Memory

When SnapShot is used, part of the cache memory is reserved, so the user data area of the cache memory decreases. When TCE and Dynamic Provisioning are also used, the user data area may decrease further. Table 4.6 to Table 4.8 show the cache memory capacity reserved and the user data area when these program products are used. For Dynamic Provisioning, the user data area differs depending on the DP Capacity Mode; refer to the Hitachi Adaptable Modular Storage Dynamic Provisioning User's Guide for detailed information.

The performance effect of the reduced user data area appears when a large amount of sequential write is executed simultaneously; for example, performance deteriorates by a few percent when 100 LUs are written at the same time.

Table 4.6 Supported Capacity of the Regular Capacity Mode (Array Types with H/W Rev. 0100)

Columns: capacity secured for SnapShot or TCE | capacity secured for Dynamic Provisioning and TCE or SnapShot | user data area when Dynamic Provisioning, TCE, and SnapShot are disabled | user data area when using Dynamic Provisioning | user data area when using Dynamic Provisioning and TCE or SnapShot. The management capacity for Dynamic Provisioning is 80 MB (AMS2100), 140 MB (AMS2300), and 300 MB (AMS2500).

AMS2100:
  1 GB/CTL: – | – | 590 MB | 590 MB | N/A
  2 GB/CTL: 512 MB | 580 MB | 1,520 MB | 1,440 MB | 940 MB
  4 GB/CTL: 2 GB | 2,120 MB | 3,520 MB | 3,460 MB | 1,400 MB
AMS2300:
  1 GB/CTL: – | – | 500 MB | 500 MB | N/A
  2 GB/CTL: 512 MB | 660 MB | 1,440 MB | 1,300 MB | 780 MB
  4 GB/CTL: 2 GB | 2,200 MB | 3,280 MB | 3,120 MB | 1,080 MB
  8 GB/CTL: 4 GB | 4,240 MB | 7,160 MB | 7,020 MB | 2,920 MB
AMS2500:
  2 GB/CTL: 512 MB | 800 MB | 1,150 MB | 850 MB | N/A
  4 GB/CTL: 1.5 GB | 1,830 MB | 2,960 MB | 2,660 MB | 1,130 MB
  6 GB/CTL: 3 GB | 3,360 MB | 4,840 MB | 4,560 MB | 1,480 MB
  8 GB/CTL: 4 GB | 4,400 MB | 6,740 MB | 6,440 MB | 2,340 MB
  10 GB/CTL: 5 GB | 5,420 MB | 8,620 MB | 8,320 MB | 3,200 MB
  12 GB/CTL: 6 GB | 6,440 MB | 10,500 MB | 10,200 MB | 4,060 MB
  16 GB/CTL: 8 GB | 8,480 MB | 14,420 MB | 14,120 MB | 5,940 MB


Table 4.7 Supported Capacity of the Regular Capacity Mode (Array Types with H/W Rev. 0200)

(Columns as in Table 4.6. The management capacity for Dynamic Provisioning is 80 MB (AMS2100), 140 MB (AMS2300), and 300 MB (AMS2500).)

AMS2100:
  1 GB/CTL: – | – | 590 MB | 590 MB | N/A
  2 GB/CTL: 512 MB | 580 MB | 1,390 MB | 1,310 MB | 810 MB
  4 GB/CTL: 2 GB | 2,120 MB | 3,360 MB | 3,280 MB | 1,220 MB
AMS2300:
  1 GB/CTL: – | – | 500 MB | 500 MB | N/A
  2 GB/CTL: 512 MB | 660 MB | 1,340 MB | 1,200 MB | 680 MB
  4 GB/CTL: 2 GB | 2,200 MB | 3,110 MB | 2,970 MB | 930 MB
  8 GB/CTL: 4 GB | 4,240 MB | 6,940 MB | 6,800 MB | 2,700 MB
AMS2500:
  2 GB/CTL: 512 MB | 800 MB | 1,150 MB | 850 MB | N/A
  4 GB/CTL: 1.5 GB | 1,830 MB | 2,780 MB | 2,480 MB | 950 MB
  6 GB/CTL: 3 GB | 3,360 MB | 4,660 MB | 4,360 MB | 1,280 MB
  8 GB/CTL: 4 GB | 4,400 MB | 6,440 MB | 6,140 MB | 2,040 MB
  10 GB/CTL: 5 GB | 5,420 MB | 8,320 MB | 8,020 MB | 2,900 MB
  12 GB/CTL: 6 GB | 6,440 MB | 9,980 MB | 9,680 MB | 3,540 MB
  16 GB/CTL: 8 GB | 8,480 MB | 14,060 MB | 13,760 MB | 5,580 MB


Table 4.8 Supported Capacity of the Maximum Capacity Mode (Array Types with H/W Rev. 0200)

(Columns as in Table 4.6. The management capacity for Dynamic Provisioning is 210 MB (AMS2100), 330 MB (AMS2300), and 580 MB (AMS2500).)

AMS2100:
  1 GB/CTL: – | – | 590 MB | N/A | N/A
  2 GB/CTL: 512 MB | 710 MB | 1,390 MB | 1,180 MB | 680 MB
  4 GB/CTL: 2 GB | 2,270 MB | 3,360 MB | 3,150 MB | 1,090 MB
AMS2300:
  1 GB/CTL: – | – | 500 MB | N/A | N/A
  2 GB/CTL: 512 MB | 850 MB | 1,340 MB | 1,010 MB | 490 MB
  4 GB/CTL: 2 GB | 2,370 MB | 3,110 MB | 2,780 MB | 740 MB
  8 GB/CTL: 4 GB | 4,430 MB | 6,940 MB | 6,610 MB | 2,510 MB
AMS2500:
  2 GB/CTL: 512 MB | 1,082 MB | 1,090 MB | N/A | N/A
  4 GB/CTL: 1.5 GB | 2,138 MB | 2,780 MB | N/A | N/A
  6 GB/CTL: 3 GB | 3,660 MB | 4,660 MB | 4,080 MB | 1,000 MB
  8 GB/CTL: 4 GB | 4,680 MB | 6,440 MB | 5,860 MB | 1,760 MB
  10 GB/CTL: 5 GB | 5,700 MB | 8,320 MB | 7,740 MB | 2,620 MB
  12 GB/CTL: 6 GB | 6,720 MB | 9,980 MB | 9,400 MB | 3,260 MB
  16 GB/CTL: 8 GB | 8,760 MB | 14,060 MB | 13,480 MB | 5,300 MB


4.4 Installing and Uninstalling SnapShot

Since SnapShot is an extra-cost option, it cannot usually be selected (it is locked) when the array is first used. To make SnapShot available, you must install SnapShot and make its function selectable (unlocked).

SnapShot can be installed from Navigator 2. This section describes the installation/uninstallation procedures performed by using Navigator 2 via the Graphical User Interface (GUI).

For procedures performed by using the Command Line Interface (CLI) of Navigator 2, see Chapter 7.

Note 1: Before installing or uninstalling SnapShot, verify that the array is operating in a normal state. If a failure such as a controller blockade has occurred, installation/uninstallation cannot be performed.

Note 2: When you install, uninstall, enable, or disable SnapShot while the array is used on the remote side of TrueCopy or TCE, the following phenomena occur when the array restarts.

Both paths of TrueCopy or TCE are blocked. When a path is blocked, a TRAP occurs, that is, a notification to the SNMP Agent Support Function; inform the departments concerned beforehand. The TrueCopy or TCE path recovers from the blockade automatically after the array is restarted.

When the pair status of TrueCopy or TCE is Paired or Synchronizing, it changes to Failure.

Because the array must be restarted, install, uninstall, enable, or disable SnapShot only after changing the pair status of TrueCopy or TCE to Split.

Note 3: Notes for the case where a DKN-200-NGW1 (hereafter, NAS unit) is connected to the disk array.

– Items to be checked in advance:

Prior to this operation, if all of the following three items apply to the disk array, perform the steps under Correspondence when connecting the NAS unit.

1. NAS unit is connected to the disk array. (* 1)

2. NAS unit is in operation. (* 2)

3. A failure has not occurred on the NAS unit. (* 3)

* 1: Confirm with the disk array administrator to check whether the NAS unit is connected or not.

* 2: Confirm with the NAS unit administrator to check whether the NAS service is operating or not.

* 3: Ask the NAS unit administrator to check whether a failure has occurred by checking with the NAS administration software, NAS Manager GUI, List of RAS Information, etc. In case of failure, execute the maintenance operation together with the NAS maintenance personnel.


– Correspondence when connecting the NAS unit:

If the NAS unit is connected, ask the NAS unit administrator for termination of NAS OS and planned shutdown of the NAS unit.

– Points to be checked after completing this operation:

Ask the NAS unit administrator to reboot the NAS unit. After rebooting, ask the NAS unit administrator to refer to “Recovering from FC path errors” in “Hitachi NAS Manager User’s Guide” and check the status of the Fibre Channel path (FC path in short) and to recover the FC path if it is in a failure status.

In addition, if there are any personnel for the NAS unit maintenance, ask the NAS unit maintenance personnel to reboot the NAS unit.

Note 4: When SnapShot is used together with TCE, the array does not need to be restarted for the function installed later, because the restart performed for the function installed first already secured the cache memory resource for the data pool.

4.4.1 Installing SnapShot

To install SnapShot, the key code or key file provided with the optional feature is required. The following describes the installation procedure:

1. Start Navigator 2.

2. Log in to Navigator 2 as a registered user.

3. Select the array in which you will install SnapShot.

4. Click Show & Configure Array.

5. Select the Install License icon.

The Install License screen appears.


6. When you install the option using the key code, click the Key Code radio button, and then set up the key code. When you install the option using the key file, click the Key File radio button, and then set up the path for the key file. Click OK.

Note: Use Browse to set the path to the key file correctly.

7. A screen appears requesting confirmation to install the SnapShot option. Click Confirm.

8. A screen appears stating that the SnapShot option has been installed. Click OK.

9. A message appears confirming that this optional feature is installed. Mark the check box and click Reboot Array.

The restart can be skipped at this time if it will be done later, when the function is enabled. However, if the installation of TCE was completed before SnapShot was installed, the dialog box asking whether to restart is not displayed, because the restart was already done to secure the cache memory resource for the data pool. When no restart is needed, a message stating that the settings of the extra-cost option(s) were updated is displayed, and the installation of the SnapShot function is complete.

If the array is restarted between the issue of a spin-down instruction and the completion of the spin-down while Power Saving (an extra-cost option of the array) is also used, the spin-down may fail because the array receives a command from a host immediately after it restarts. If the spin-down fails, execute the spin-down again. Before restarting the array, check that no spin-down instruction has been issued, or that the spin-down has completed (no RAID group is in the Power Saving Status of Normal(Command Monitoring)).

Note: To complete the installation of the option, restart the array. The host cannot access the array until the reboot is completed and the system restarts. Make sure that the host has stopped accessing data before beginning the restart process.


Note: After the option is unlocked, its status is displayed as enabled even before the array reboots, but SnapShot cannot be operated until the array reboots. To use the option, reboot the array.

Restart usually takes from 4 to 15 minutes.

Note: It may take time for the array to respond, depending upon the condition of the array. If it does not respond after 15 minutes or more, check the condition of the array.

10. A message appears stating that the restart is successful. Click Close.

4.4.2 Uninstalling SnapShot

To uninstall SnapShot, the key code provided with the optional feature is required. Once uninstalled, SnapShot cannot be used again until it is installed using the key code or key file.

Note: The following conditions must be satisfied in order to uninstall SnapShot.

All SnapShot pairs must be released (that is, all LUs have the Simplex status).

To uninstall SnapShot:

1. Start Navigator 2.

2. Log in to Navigator 2 as a registered user.

3. Select the array in which you will uninstall SnapShot.

4. Click Show & Configure Array.

5. Select the Licenses icon in the Settings tree view.


The Licenses list appears.

6. Click De-install License.

The De-Install License screen appears.

7. Enter a key code in the text box. Click OK.

8. A screen appears requesting confirmation to uninstall the SnapShot option. Click OK.

9. A message appears confirming that this optional feature is uninstalled. Mark the check box and click Reboot Array.


The restart can be skipped at this time if it will be done later, when the change takes effect. However, if the installation of TCE was completed before SnapShot was installed, the dialog box asking whether to restart is not displayed, because the restart was already done to secure the cache memory resource for the data pool. When no restart is needed, a message stating that the settings of the extra-cost option(s) were updated is displayed, and the uninstallation of the SnapShot function is complete.

If the array is restarted between the issue of a spin-down instruction and the completion of the spin-down while Power Saving (an extra-cost option of the array) is also used, the spin-down may fail because the array receives a command from a host immediately after it restarts. If the spin-down fails, execute the spin-down again. Before restarting the array, check that no spin-down instruction has been issued, or that the spin-down has completed (no RAID group is in the Power Saving Status of Normal(Command Monitoring)).

Note: To complete the uninstallation of the option, restart the array. The host cannot access the array until the reboot is completed and the system restarts. Make sure that the host has stopped accessing data before beginning the restart process.

Restart usually takes 4 to 15 minutes.

Note: It may take time for the array to respond, depending upon the condition of the array. If it does not respond after 15 minutes or more, check the condition of the array.

10. A message appears stating that the restart is successful. Click Close.

4.4.3 Enabling or Disabling SnapShot

Once SnapShot is installed, it can be enabled or disabled.


Note: The following conditions must be satisfied in order to disable SnapShot.

All SnapShot pairs must be released (that is, all LUs have the Simplex status).

The following describes the enabling/disabling procedure.

1. Start Navigator 2.

2. Log in to Navigator 2 as a registered user.

3. Select the array in which you will set SnapShot.

4. Click Show & Configure Array.

5. Select the Licenses icon in the tree view.

6. Select SNAPSHOT in the Licenses list.

7. Click Change Status.

The Change License screen appears.

8. To disable the option, uncheck the checkbox; to enable it, check the checkbox. Then click OK.

9. A message appears confirming that this optional feature is set. Click OK.

10. A message appears confirming that this optional feature is set. Mark the check box and click Reboot Array.

If the array is restarted between the issue of a spin-down instruction and the completion of the spin-down while Power Saving (an extra-cost option of the array) is also used, the spin-down may fail because the array receives a command from a host immediately after it restarts. If the spin-down fails, execute the spin-down again. Before restarting the array, check that no spin-down instruction has been issued, or that the spin-down has completed (no RAID group is in the Power Saving Status of Normal(Command Monitoring)).


Note: To apply the new option status, restart the array. The setting does not take effect until the array restarts. The host cannot access the array until the reboot is completed and the system restarts. Make sure that the host has stopped accessing data before beginning the restart process.

Restart usually takes 4 to 15 minutes.

Note: It may take time for the array to respond, depending upon the condition of the array. If it does not respond after 15 minutes or more, check the condition of the array.

11. A message appears, stating that the restart is successful. Click Close.


4.5 Operations for SnapShot Configuration

4.5.1 Setting the DMLU

The DMLU (Differential Management Logical Unit) is an exclusive logical unit for storing the differential data while the volume is being copied. The DMLU in the array is treated in the same way as the other logical units. However, a logical unit that is set as the DMLU is not recognized by a host (it is hidden).

When the DMLU is not set, it must be created. Set a logical unit with a size of 10 GB minimum as the DMLU. It is recommended that two DMLUs be set. The second one is used for the mirroring.
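The same operation is also available from the Navigator 2 CLI with the audmlu command (see section 7.2.1). As a minimal sketch only — array-name stands for the name under which the array is registered, and LU 0 is assumed to be an available logical unit of 10 GB or more:

% audmlu -unit array-name -availablelist
% audmlu -unit array-name -set -lu 0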

To designate DMLUs:

1. Select the DMLU icon in the Settings tree view.

The Differential Management Logical Units list appears.

2. Click Add DMLU.

The Add DMLU screen appears.

3. Select one or two LUNs you want to set as the DMLU, and click OK.

4. A message displays. Select the checkbox, and click Confirm.


5. A message displays. Click Close.

To remove the designated DMLUs:

Note:

The following restrictions apply when a ShadowImage, SnapShot, TrueCopy, or TCE pair exists, a TrueCopy or TCE path is defined, or a SnapShot or TCE data pool is defined.

– When two DMLUs are set, only one DMLU can be removed.

– When only one DMLU is set, the DMLU cannot be removed.

1. Select the DMLU icon in the Settings tree view.

The Differential Management Logical Units list appears.

2. Select the LUN you want to remove, and click Remove DMLU.

3. A message displays. Click Close.

4.5.2 Setting Data Pool Volumes

Up to 64 data pools can be designated for each array, by assigning a logical unit that has been created and formatted. Up to 64 logical units can be assigned to each data pool. The accurate capacity of a data pool cannot be confirmed immediately after an LU has been assigned; confirmation takes approximately 3 minutes per 100 GB of capacity.

The following restrictions apply to LUs assigned to a data pool:

Logical units once assigned to a data pool are no longer recognized by a host.

Because data is lost if the data pool capacity limit is exceeded, a capacity of at least 20 GB is recommended as a standard data pool capacity. When the used data pool capacity exceeds the threshold value (default: a usage rate of 70%), pairs in the Split status change to the Threshold over status.

An LU on SAS drives, an LU on SAS7.2K drives, an LU on SSD drives, and an LU on SATA drives cannot coexist in a data pool.

When using SnapShot with Cache Partition Manager, the segment size of the LUs belonging to a data pool must be the default size (16 kB) or less.


The following is the procedure for creating a data pool for storing differential data for use by SnapShot.

To designate data pool(s):

1. Select the Data Pools icon in the tree view.

The Data Pool list appears.


2. Click Create Data Pool.

The Create Data Pool screen appears.

3. Specify the Data Pool Number and/or Threshold if necessary.

The default Threshold value is 70. Specify any value from 50 through 80.

4. Select the LU to be added to the data pool.

5. Click OK.


6. A message appears. Click Close.

Note: The data pool status becomes Threshold Over when the used capacity of the data pool reaches the threshold +1%. If the usage of the data pool then decreases, the data pool status returns to Normal when the usage reaches the threshold –5%.
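For example, with the default threshold of 70%, a data pool enters the Threshold Over status when its used capacity reaches 71% of the data pool capacity, and returns to the Normal status once usage falls to 65%.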

4.5.3 Editing Data Pool Volumes

1. Select the Data Pools icon in the tree view.

2. Select a data pool you want to edit from the Data Pool list.

3. Click Edit Data Pool.

The Edit Data Pool screen appears.

4. Select the LUN for the data pool from the LUs list, if necessary.

5. Specify the Threshold if necessary.

6. Click OK.

7. A message appears. Click Close.

4.5.4 Deleting Data Pool Volumes

Note: Before deleting the logical units set as a data pool, you must delete all SnapShot images (V-VOLs).

1. Select the Data Pools icon in the tree view.

2. Select a data pool you want to delete in the Data Pool list.

3. Click Delete Data Pool.

4. A message appears. Click Close.


4.5.5 Setting V-VOLs

To create a SnapShot pair you must first set a V-VOL.

If a specification for the logical unit assigned to a V-VOL is omitted when setting the V-VOL, Navigator 2 assigns the smallest undefined number to the logical unit.

To set the V-VOL:

1. Select the SnapShot Logical Units icon in the tree view.

The SnapShot Logical Units list appears.

2. Click Create SnapShot LU.

The Create SnapShot Logical Unit screen appears.

3. Enter the LUN for the V-VOL. Enter the Capacity and select capacity unit from the drop-down list.

Note: The V-VOL capacity must be equal to P-VOL capacity.

4. Click OK.

5. A message appears stating that the V-VOL was created successfully. Click Close.

4.5.6 Deleting V-VOLs

Note: In order to delete the V-VOL, the pair state must be Simplex.

1. Select the SnapShot Logical Units icon in the tree view.

The SnapShot Logical Units list appears.

2. Select the V-VOL you want to delete in the SnapShot Logical Units list.

3. Click Delete SnapShot LU.

4. A message appears. Click Close.


4.5.7 Setting the LU Ownership

Note: The load balancing function is not applied to the LUs specified as a SnapShot pair. Since the ownership of the LUs specified as a SnapShot pair is the same as the ownership of the LUs specified as a data pool, perform the setting so that the ownership of LUs specified as a data pool is balanced in advance.

To set the LU ownership:

1. Select LU Ownership icon in the Tuning Parameter tree view of the Performance tree view.

The Change Logical Unit Ownership screen appears.

2. Select Controller 0 or Controller 1 and X Core or Y Core, and then click OK.

(The Core item is not displayed on AMS2100/2300 arrays.)

3. A message appears. Click Close.


Chapter 5 Performing SnapShot GUI Operations

This section includes the following:

Operations Workflow (see section 5.1)

Pair Operations (see section 5.2)

Prelim

inary

Page 90: Copy on WriteSnapShotUsersGuide

78

5.1 Operations Workflow

Figure 5.1 shows pair operation using Navigator 2 GUI.

Figure 5.1 Pair Operations

[Figure: pair status transitions between Simplex, Paired, Split, Reverse Synchronizing, and Failure. Create pair (from Simplex) or Update pair (from Split) leads to the Paired status; checking Split the pair immediately after creation is completed, or splitting a Paired pair, gives the Split status. Complete restoration returns a Split pair to Paired by way of the Reverse Synchronizing status. An error during copying or reverse synchronization places the pair in the Failure (not synchronized) status.]


5.2 Pair Operations

5.2.1 Confirming Pair Status

1. Select the Local Replication icon in the Replication tree view.


The Pairs list appears.

– Pair Name: The pair name displays.

– Primary Volume: The primary volume LUN displays.

– Secondary Volume: The secondary volume LUN displays.

– Status: The pair status displays.

Simplex: A pair is not created.

Reverse Synchronizing: Update copy (reverse) is in progress.

Paired: Initial copy or update copy is completed.

Split: The pair is split.

Failure: A failure has occurred.

---: Other than above.

– Copy Type: SnapShot or ShadowImage displays.

– Group Number/Group Name: A group number, group name, or ---:{Ungrouped} displays.

– Backup Time: The acquired backup time or N/A displays.

– Split Description: The character string specified in Attach description to identify the pair upon split displays. When no description was specified, N/A displays.


5.2.2 Creating Pairs

To create the SnapShot pairs:

1. Select the Local Replication icon in the Replication tree view.

2. Click Create Pair.

The Create Pair screen appears.

3. Select SnapShot as the Copy Type.

4. Enter a Pair Name if necessary.

5. Select a primary volume and secondary volume.

Note: The LUN may differ from the H-LUN, which is the LUN recognized by the host. Refer to section 4.3.1, Specifying P-VOL and V-VOL when Pair Operation, and confirm the mapping of LUN and H-LUN.

6. Select the Data Pool Number from the drop-down list.

7. Click the Advanced tab.


8. Select a Copy Pace from the drop-down list.

9. Under After pair creation, add the pair to a group – Group Assignment, select one of the following:

– {Ungrouped}: The pair does not belong to a group.

– New or existing Group Number: Specify a group number from 0 to 255.

– Existing Group Name: Enter a group name.

10. If you check Split the pair immediately after creation is completed, the pair is split as soon as pair creation completes.

11. Click OK.

12. A confirmation message appears. Check the Yes, I have read the above warning and want to create the pair. check box, and click Confirm.

13. A confirmation message appears. Click Close.


5.2.3 Updating the V-VOL

To update the V-VOL:

First, re-synchronize the SnapShot pair.

1. Select the Local Replication icon in the Replication tree view.

2. Select the pair you want to re-synchronize in the Pairs list.

3. Click Resync Pair.

A confirmation message appears.

4. Check the Yes, I have read the above warning and want to re-synchronize selected pairs. check box, and click Confirm.

5. A confirmation message appears. Click Close.

In the next step, split the re-synchronized pair.

6. Select the re-synchronized pair in the Pairs list.

7. Click Split Pair.

The Split Pair screen appears.

8. If necessary, enter a character string in Attach description to identify the pair upon split.

9. Click OK.


10. A confirmation message appears. Click Close.

5.2.4 Restoring the V-VOL to the P-VOL

To restore the SnapShot pairs:

1. Select the Local Replication icon in the Replication tree view.

2. Select the pair you want to restore in the Pairs list.

3. Click Restore Pair.

A confirmation message appears.

4. Check the Yes, I have read the above warning and want to restore selected pairs. check box, and click Confirm.

5. A confirmation message appears. Click Close.

5.2.5 Releasing Pairs

To release the SnapShot pairs:

1. Select the Local Replication icon in the Replication tree view.

2. Select the pair you want to release in the Pairs list.

3. Click Delete Pair.

A confirmation message appears.


4. Check the Yes, I have read the above warning and agree to delete selected pairs. check box, and click Confirm.

5. A confirmation message appears. Click Close.

5.2.6 Changing Pair Information

You can change the pair name, group name, and/or copy pace.

1. Select the Local Replication icon in the Replication tree view.

2. Select the pair whose information you want to change in the Pairs list.

3. Click Edit Pair.

The Edit Pair screen appears.

4. Change the Pair Name, Group Name, and/or Copy Pace if necessary.

5. Click OK.


6. A confirmation message appears. Click Close.

5.2.7 Creating Pairs that Belong to a Group

To create multiple SnapShot pairs that belong to a group:

1. Create the first pair that belongs to a group according to the sequence in section 5.2.2, Creating Pairs. At step 9 of that sequence, specify an unused group number for the new group.

The new group is created, and the new pair is created within it.

2. Add a name to the group, if necessary, according to the sequence in section 5.2.6, Changing Pair Information.

3. Create the next pair that belongs to the created group according to the sequence in section 5.2.2, Creating Pairs. At step 9 of that sequence, specify the number of the created group. SnapShot pairs that share the same P-VOL must use the same data pool.

4. Repeat step 3 to create multiple pairs that belong to the same group.


Chapter 6 System Operation Example

This section includes the following:

Backup Operation for Quick Recovery (see section 6.1)

Online Backup Operation Using an Inexpensive Configuration (see section 6.2)

Restoring Backup Data (see section 6.3)


6.1 Backup Operation for Quick Recovery

Note the following regarding backup operations for quick recovery:

Because a V-VOL that can be used for restoration is retained around the clock (24/365), recovery can occur without restoring from a tape device. (However, because a pair can enter the Failure status through data pool overflow or hardware failure, tape backup is still mandatory.)

Make two V-VOLs per P-VOL and issue a SnapShot instruction at 0:00 and 12:00.

Back up onto tape at night, when few I/O instructions are issued by a host (as a general guide, a time of day when host I/O is below 100 IOPS).

Back up onto tapes via two ports simultaneously.

When backing up onto tapes, the total capacity of the V-VOL concerned must be 1.5 TB or smaller.

Example: When the total V-VOL capacity is 1 TB, time required for backing up (at a speed of 100 MB/sec) is 3 hours.
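(As a rough check of this figure: 1 TB is about 1,000,000 MB, and 1,000,000 MB ÷ 100 MB/sec = 10,000 seconds, or roughly 2.8 hours, which is rounded up to about 3 hours.)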

Figure 6.1 Ordinary Quick Recovery Operation

[Figure: a host and a backup host are attached to CTL0/CTL1 of the array. Two 500 GB P-VOLs (P-VOL1 and P-VOL2) each have two V-VOLs — V-VOL11/V-VOL12 and V-VOL21/V-VOL22, taken at 0:00 and 12:00 — with 250 GB data pools 0 and 1. Because the SnapShot instruction is issued to each V-VOL at a regular time, a split V-VOL is always retained; V-VOL11 and V-VOL21 are backed up onto the tape device (at 100 MB/sec) at the time of the SnapShot instruction. Note in the figure: with this data pool capacity it is impossible to restore the V-VOL from the tape device; in such a case, increase each data pool capacity to 1.5 times the P-VOL capacity (750 GB) or more.]


6.2 Online Backup Operation Using an Inexpensive Configuration

Note the following while doing an online backup operation:

A period during which the V-VOL must be retained is limited to the period required for backup to a tape device.

The configuration cost is minimal because the data pool capacity can be managed by storing only data updated during a backup to a tape device.

Back up onto a tape device at night, when few I/O instructions are issued by a host (in general, a time of day when host I/O is below 100 IOPS).

Back up onto tapes via two ports simultaneously.

When backing up onto tapes, the total capacity of the V-VOL concerned must be 1.5 TB or smaller.

Example: When the total V-VOL capacity is 1 TB, time required for backing up (at a speed of 100 MB/sec) is 3 hours.

To restore from a tape device, only “Direct restoration of the P-VOL” is possible.

Figure 6.2 Ordinary Operation

[Figure: a host and a backup host are attached to CTL0/CTL1 of the array. Two 500 GB P-VOLs (P-VOL1 and P-VOL2) are paired with V-VOL1 and V-VOL2 (total V-VOL capacity 1 TB), with 100 GB data pools 0 and 1. During the night the pair is created and split, the backup is executed to the tape device at 100 MB/sec, and the pair is then released, returning to Simplex; the V-VOLs are retained only while the backup runs.]


6.3 Restoring Backup Data

There are two ways to restore backup data. One is the restoration using backup data within the same AMS array, and the other is restoration using backup data stored in a tape device.

Restoration using backup data within the same AMS array

– This is the restoration method used for backup operation for quick recovery.

Restoration using backup data stored in a tape device (via V-VOL)

– This is a restoration method used when the V-VOL has a Failure status in backup operation for quick recovery.

– Free data pool capacity should be larger than the P-VOL capacity (recommendation is 1.5 times of the P-VOL capacity or more).

Restoration using backup data stored in a tape device (direct P-VOL)

– This is a restoration method used when the V-VOL has a Failure status in backup operation for quick recovery.

– This method is also used when the free data pool capacity is smaller than the P-VOL capacity, as in the online backup operation using an inexpensive configuration.


6.3.1 Backup Operation for Quick Recovery

This restoration method uses backup data within the same AMS array.

When a software failure (caused by a wrong operation by a user or an application program bug) occurs, perform restoration, selecting backup data you want to return from the V-VOL being retained.

Note: It is necessary to un-mount the P-VOL once before restoring the P-VOL from the V-VOL.

It is possible to restore the V-VOL data directly to the P-VOL.

[Figure: a host and a backup host are attached to CTL0/CTL1 of the array. The 500 GB P-VOL is restored directly from a retained V-VOL (V-VOL1, taken at 0:00, or V-VOL2, taken at 12:00); the data pool is 250 GB.]


6.3.2 Restoring Backup Data from a Tape Device

Return the backup data stored on the tape device to the V-VOL from which the backup was taken, and restore the P-VOL using the V-VOL data.

Restoration via V-VOL

– Return backup data stored in the tape device to the V-VOL once and restore the P-VOL using the V-VOL data.

Notes:

When returning the backup data to the V-VOL, free data pool capacity larger than the P-VOL capacity is required. More than 1.5 times the P-VOL capacity is recommended.

It is necessary to un-mount the P-VOL once, before restoring the P-VOL from the V-VOL.

Return the backup data stored on the tape device to the V-VOL once, and restore the P-VOL using the V-VOL data.

[Figure: a host and a backup host are attached to CTL0/CTL1 of the array. The backup data is returned from the tape device to the V-VOL, and the 500 GB P-VOL is then restored from the V-VOL; 750 GB of free data pool capacity (1.5 times the P-VOL capacity) is shown.]

Direct restoration of P-VOL

– Use it when free data pool capacity is insufficient or V-VOL has entered Failure status.

– Restore data from a backup host to the P-VOL via a LAN or directly from tape device to the P-VOL.


Notes:

When copying the backup data from the tape device onto the P-VOL, release all pairs on that P-VOL (so that it is Simplex). If backup data on the tape device is restored to a paired P-VOL (Split or Paired status), a data copy is initiated from the P-VOL to the data pool in order to retain the data on the V-VOL. This causes a drop in restore performance, and the free capacity of the data pool must be larger than the P-VOL capacity.

It is necessary to suspend host access during restoration of P-VOL.

Restore data to the P-VOL via a LAN or directly from the tape device.

[Figure: a host and a backup host are attached to CTL0/CTL1 of the array. Data is restored from the backup host to the 500 GB P-VOL via a LAN, or directly from the tape device; the data pool is 100 GB.]


Chapter 7 Operations Using CLI

This section describes the following operation procedures for SnapShot using the CLI of Navigator 2:

Installing SnapShot

Operations for SnapShot Configuration

Performing SnapShot CLI Operations

Applications of CLI Commands

For details on Navigator 2, refer to the Hitachi Storage Navigator Modular 2 Command Line Interface (CLI) User’s Guide.


7.1 Installing SnapShot

7.1.1 Installing SnapShot

Because SnapShot is an extra-cost option, it is locked (cannot be selected) until it is installed. To make SnapShot available, you must install it and unlock its function.

Note 1: Before installing/uninstalling SnapShot, verify that the array to be operated is functioning normally. If a failure such as a controller blockage has occurred, installation/uninstallation cannot be performed.

Note 2: If you install SnapShot between the issue of a spin-down instruction and the completion of the spin-down while Power Saving (an extra-cost option of the array) is also used, the spin-down may fail because the array receives a command from a host immediately after it restarts. If the spin-down fails, execute the spin-down again. Before installing SnapShot, check that no spin-down instruction has been issued, or that the spin-down has completed (no RAID group is in the Power Saving Status of Normal(Command Monitoring)).

Note 3: When SnapShot is used together with TCE, the array does not need to be restarted for the function installed later, because the restart performed for the function installed first already secured the cache memory resource for the data pool.

Note 4: When you install, uninstall, enable, or disable SnapShot while the array is used on the remote side of TrueCopy or TCE, the following occur when the array restarts.

Both TrueCopy or TCE paths are blocked. When a path is blocked, a TRAP occurs, that is, a notification to the SNMP Agent Support Function. Inform the departments concerned of this beforehand. The TrueCopy or TCE path recovers from the blockade automatically after the array restarts.

When the pair status of TrueCopy or TCE is Paired or Synchronizing, it changes to Failure.

Because the array must be restarted, perform the installing, uninstalling, enabling, or disabling of SnapShot only after changing the pair status of TrueCopy or TCE to Split.

Note 5: Notes for the case where a DKN-200-NGW1 (hereafter, NAS unit) is connected to the disk array.

– Items to be checked in advance:

Prior to this operation, if all of the following three items apply to the disk array, perform the steps under Correspondence when connecting the NAS unit.

1. NAS unit is connected to the disk array. (* 1)

2. NAS unit is in operation. (* 2)

3. A failure has not occurred on the NAS unit. (* 3)


* 1: Confirm with the disk array administrator to check whether the NAS unit is connected or not.

* 2: Confirm with the NAS unit administrator to check whether the NAS service is operating or not.

* 3: Ask the NAS unit administrator to check whether a failure has occurred by checking with the NAS administration software, NAS Manager GUI, List of RAS Information, etc. In case of failure, execute the maintenance operation together with the NAS maintenance personnel.

– Correspondence when connecting the NAS unit:

If the NAS unit is connected, ask the NAS unit administrator for termination of NAS OS and planned shutdown of the NAS unit.

– Points to be checked after completing this operation:

Ask the NAS unit administrator to reboot the NAS unit. After rebooting, ask the NAS unit administrator to refer to “Recovering from FC path errors” in “Hitachi NAS Manager User’s Guide” and check the status of the Fibre Channel path (FC path in short) and to recover the FC path if it is in a failure status.

In addition, if there are any personnel for the NAS unit maintenance, ask the NAS unit maintenance personnel to reboot the NAS unit.

To install SnapShot, a key code or key file provided with the optional feature is required. The following describes the installation procedure.

To install SnapShot:

1. From the command prompt, register the array in which SnapShot is to be installed, then connect to the array.

2. Execute the auopt command to install SnapShot. An example is shown below.

Example:

% auopt -unit array-name -lock off -keycode manual-attached-keycode
Are you sure you want to unlock the option? (y/n [n]): y
The option is unlocked.
In order to complete the setting, it is necessary to reboot the subsystem.
Host will be unable to access the subsystem while restarting.
Host applications that use the subsystem will terminate abnormally.
Please stop host access before you restart the subsystem.
Also, if you are logging in, the login status will be canceled when restarting begins.
When using Remote Replication, restarting the remote subsystem will cause both Remote Replication paths to fail.
Remote Replication pair status will be changed to "Failure(PSUE)" when pair status is "Paired(PAIR)" or "Synchronizing(COPY)".
Please change Remote Replication pair status to "Split(PSUS)" before restart.
Do you agree with restarting? (y/n [n]): y
Are you sure you want to execute? (y/n [n]): y
Now restarting the subsystem. Start Time hh:mm:ss Time Required 4 - 15min.
The subsystem restarted successfully.
%

Note: It may take time for the array to respond, depending on the condition of the array. If it does not respond after 15 minutes, check the condition of the array.


3. Execute the auopt command to confirm whether SnapShot has been installed. An example is shown below.

Example:

% auopt -unit array-name -refer
Option Name  Type       Term  Status
SNAPSHOT     Permanent  ---   Enable
%

SnapShot is installed and the status is “Enable”. SnapShot installation is complete.

7.1.2 Uninstalling SnapShot

To uninstall SnapShot, a key code provided with the optional feature is required. Once uninstalled, SnapShot cannot be used (locked) until it is again unlocked using the key code or key file.

Note 1: The following conditions must be satisfied in order to uninstall SnapShot.

– All SnapShot pairs must be released (that is, all LUs have the Simplex status).

– All data pools must be deleted.

– All SnapShot Images (V-VOL) must be deleted.

Note 2: If you uninstall SnapShot between the issue of a spin-down instruction and the completion of the spin-down while Power Saving (an extra-cost option of the array) is also used, the spin-down may fail because the array receives a command from a host immediately after it restarts. If the spin-down fails, execute the spin-down again. Before uninstalling SnapShot, check that no spin-down instruction has been issued, or that the spin-down has completed (no RAID group is in the Power Saving Status of Normal(Command Monitoring)).

The following describes the un-installation procedure.

To uninstall SnapShot:

1. From the command prompt, register the array in which the SnapShot is to be uninstalled, then connect to the array.

2. Execute the auopt command to uninstall SnapShot. An example is shown below.

Example:


% auopt -unit array-name -lock on -keycode manual-attached-keycode
Are you sure you want to lock the option? (y/n [n]): y
The option is locked.
In order to complete the setting, it is necessary to reboot the subsystem.
Host will be unable to access the subsystem while restarting.
Host applications that use the subsystem will terminate abnormally.
Please stop host access before you restart the subsystem.
Also, if you are logging in, the login status will be canceled when restarting begins.
When using Remote Replication, restarting the remote subsystem will cause both Remote Replication paths to fail.
Remote Replication pair status will be changed to "Failure(PSUE)" when pair status is "Paired(PAIR)" or "Synchronizing(COPY)".
Please change Remote Replication pair status to "Split(PSUS)" before restart.
Do you agree with restarting? (y/n [n]): y
Are you sure you want to execute? (y/n [n]): y
Now restarting the subsystem. Start Time hh:mm:ss Time Required 4 - 15min.
The subsystem restarted successfully.
%

Note: It may take time for the array to respond, depending on the condition of the array. If it does not respond after 15 minutes, check the condition of the array.

3. Execute the auopt command to confirm whether SnapShot has been uninstalled. An example is shown below.

Example:

% auopt -unit array-name -refer
DMEC002015: No information displayed.
%

SnapShot uninstallation is complete.

7.1.3 Enabling or Disabling SnapShot

Once installed, SnapShot can be enabled or disabled.

Note 1: The following conditions must be satisfied in order to disable SnapShot.

– All SnapShot pairs must be released (that is, all LUs have the Simplex status).

– All data pools must be deleted.

– All SnapShot Images (V-VOL) must be deleted.

Note 2: If you disable or enable SnapShot between the issue of a spin-down instruction and the completion of the spin-down while Power Saving (an extra-cost option of the array) is also used, the spin-down may fail because the array receives a command from a host immediately after it restarts. If the spin-down fails, execute the spin-down again. Before disabling or enabling SnapShot, check that no spin-down instruction has been issued, or that the spin-down has completed (no RAID group is in the Power Saving Status of Normal(Command Monitoring)).

The following describes the enabling/disabling procedure.

1. From the command prompt, register the array in which the status of the feature is to be changed, then connect to the array.


2. Execute the auopt command to change the status (enable or disable).

The following is an example of changing the status from enable to disable. If you want to change the status from disable to enable, enter enable after the -st option.

Example:

% auopt -unit array-name -option SNAPSHOT -st disable
Are you sure you want to disable the option? (y/n [n]): y
The option has been set successfully.
In order to complete the setting, it is necessary to reboot the subsystem.
Host will be unable to access the subsystem while restarting.
Host applications that use the subsystem will terminate abnormally.
Please stop host access before you restart the subsystem.
Also, if you are logging in, the login status will be canceled when restarting begins.
When using Remote Replication, restarting the remote subsystem will cause both Remote Replication paths to fail.
Remote Replication pair status will be changed to "Failure(PSUE)" when pair status is "Paired(PAIR)" or "Synchronizing(COPY)".
Please change Remote Replication pair status to "Split(PSUS)" before restart.
Do you agree with restarting? (y/n [n]): y
Are you sure you want to execute? (y/n [n]): y
Now restarting the subsystem. Start Time hh:mm:ss Time Required 4 - 15min.
The subsystem restarted successfully.
%

Note: It may take time for the array to respond, depending on the condition of the array. If it does not respond after 15 minutes, check the condition of the array.

3. Execute auopt to confirm whether the status has been changed. An example is shown below.

Example:

% auopt -unit array-name -refer
Option Name  Type       Term  Status
SNAPSHOT     Permanent  ---   Disable
%

SnapShot Enable/Disable is complete.


7.2 Operations for SnapShot Configuration

7.2.1 Setting the DMLU

The DMLU (Differential Management Logical Unit) is an exclusive logical unit for storing the differential data while the volume is being copied. The DMLU in the array is treated in the same way as the other logical units. However, a logical unit that is set as the DMLU is not recognized by a host (it is hidden).

When the DMLU is not set, it must be created. Set a logical unit with a size of 10 GB minimum as the DMLU. It is recommended that two DMLUs be set. The second one is used for the mirroring.

To designate DMLUs:

1. From the command prompt, register the array to which you want to create the DMLU. Connect to the array.

2. Execute the audmlu command to create a DMLU.

This command first displays LUs that can be assigned as DMLUs and later creates a DMLU.

Example:

% audmlu -unit array-name -availablelist
Available Logical Units
  LUN  Capacity  RAID Group  DP Pool  RAID Level  Type  Status
    0  10.0 GB            0  N/A      5( 4D+1P)   SAS   Normal
%
% audmlu -unit array-name -set -lu 0
Are you sure you want to set the DM-LU? (y/n [n]): y
The DM-LU has been set successfully.
%

3. To release an already set DMLU, specify the -rm and -lu options in the audmlu command.

Example:

% audmlu -unit array-name -rm -lu 0
Are you sure you want to release the DM-LU? (y/n [n]): y
The DM-LU has been released successfully.
%

The following restrictions apply when a ShadowImage, SnapShot, TrueCopy, or TCE pair exists, a TrueCopy or TCE path is defined, or a SnapShot or TCE data pool is defined.

– When two DMLUs are set, only one differential management LU can be released.

– When only one DMLU is set, the DMLU cannot be released.

7.2.2 Setting the Data Pool

Up to 64 data pools can be designated for each array, by assigning a logical unit that has been created and formatted. Up to 64 logical units can be assigned to each data pool. The accurate capacity of a data pool cannot be confirmed immediately after an LU has been assigned; confirmation takes approximately 3 minutes per 100 GB of capacity.

The following restrictions apply to LUs assigned to a data pool:

Logical units once assigned to a data pool are no longer recognized by a host.

Because data is lost if the data pool capacity limit is exceeded, a capacity of at least 20 GB is recommended as a standard data pool capacity. When the used data pool capacity exceeds the threshold value (default: a usage rate of 70%), pairs in the Split status change to the Threshold over status.

An LU on SAS drives, an LU on SAS7.2K drives, and an LU on SATA drives cannot coexist in a data pool.

The following is the procedure for creating a data pool for storing differential data for use by SnapShot.

To designate data pool(s):

1. From the command prompt, register the array to which you want to create the data pool, then connect to the array.

2. Execute the aupool command to create a data pool.

First, display the LUs to be assigned to a data pool, and then create a data pool.

The following is the example of specifying LU 100 for data pool 0.

Example:

% aupool -unit array-name -availablelist -poolno 0
Data Pool : 0
Available Logical Units
  LUN  Capacity  RAID Group  DP Pool  RAID Level  Type  Status
  100  30.0 GB            0  N/A      6( 9D+2P)   SAS   Normal
  200  35.0 GB            0  N/A      6( 9D+2P)   SAS   Normal
%
% aupool -unit array-name -add -poolno 0 -lu 100
Are you sure you want to add the logical unit(s) to the data pool 0? (y/n [n]): y
The logical unit has been successfully added.
%

3. Execute the aupool command to verify that the data pool has been created. Refer to the following example.

Example:

% aupool -unit array-name -refer -poolno 0
Data Pool : 0
Data Pool Usage Rate: 6% (2.0/30.0 GB)
Threshold : 70%
Usage Status : Normal
  LUN  Capacity  RAID Group  DP Pool  RAID Level  Type  Status
  100  30.0 GB            0  N/A      6( 9D+2P)   SAS   Normal
%

4. To delete an existing data pool, refer to the following example.

This is an example of deleting data pool 0.


Note: When deleting the logical unit set as the data pool, it is necessary to delete all SnapShot images (V-VOLs).

Example:

% aupool -unit array-name -rm -poolno 0
Are you sure you want to delete all logical units from the data pool 0? (y/n [n]): y
The logical units have been successfully deleted.
%

5. To change an existing threshold value for a data pool, refer to the following example.

An example of changing data pool 0:

Example:

% aupool -unit array-name -cng -poolno 0 -thres 70
Are you sure you want to change the threshold of usage rate in the data pool? (y/n [n]): y
The threshold of the data pool usage rate has been successfully changed.
%

7.2.3 Setting the V-VOL

To create a SnapShot pair you must first set a V-VOL.

If a specification for the logical unit assigned to a V-VOL is omitted when setting the V-VOL, Navigator 2 assigns the smallest undefined number to the logical unit.

To set the V-VOL:

1. From the command prompt, register the array to which you want to set the V-VOL, then connect to the array.

2. Execute the aureplicationvvol command to create a V-VOL.

Example:

% aureplicationvvol -unit array-name -add -lu 1000 -size 1
Are you sure you want to create the SnapShot logical unit 1000? (y/n [n]): y
The SnapShot logical unit has been successfully created.
%

3. To delete an existing SnapShot logical unit, refer to the following example.

This is an example of deleting SnapShot logical unit 1000.

Note: To delete the V-VOL, the pair status of that V-VOL must be Simplex.

Example:

% aureplicationvvol -unit array-name -rm -lu 1000
Are you sure you want to delete the SnapShot logical unit 1000? (y/n [n]): y
The SnapShot logical unit has been successfully deleted.
%


7.2.4 Setting the LU Ownership

Note: The load balancing function is not applied to the LUs specified as a SnapShot pair. Since the ownership of the LUs specified as a SnapShot pair is the same as the ownership of the LUs specified as a data pool, perform the setting so that the ownership of LUs specified as a data pool is balanced in advance.

The procedure for setting the LU ownership by CLI is shown below:

1. From the command prompt, register the array to which you want to set the LU ownership, and then connect to the array.

2. Execute the autuningluown command to confirm an LU ownership.

Example:

% autuningluown -unit array-name -refer
  LU  CTL  Core  RAID Group  DP Pool  Cache Partition  Type
   0    0  X              0  N/A                    0  SAS
   1    1  X              0  N/A                    0  SAS
1000    0  X              0  N/A                    0  SAS
1001    1  X              0  N/A                    0  SAS
2000    0  X              0  N/A                    0  SAS
2001    1  Y              0  N/A                    0  SAS
%

(The Core column shows N/A on AMS2100/2300 arrays.)

3. Execute the autuningluown command to change the LU 2001 ownership.

Example:

% autuningluown -unit array-name -set -lu 2001 -ctl0 -coreX
Are you sure you want to set the LU ownership? (y/n [n]): y
The LU ownership has been set successfully.
%

4. Execute the autuningluown command to confirm an LU ownership.

Example:

% autuningluown -unit array-name -refer
  LU  CTL  Core  RAID Group  DP Pool  Cache Partition  Type
   0    0  X              0  N/A                    0  SAS
   1    1  X              0  N/A                    0  SAS
1000    0  X              0  N/A                    0  SAS
1001    1  X              0  N/A                    0  SAS
2000    0  X              0  N/A                    0  SAS
2001    0  X              0  N/A                    0  SAS
%


7.3 Performing SnapShot CLI Operations

The aureplicationlocal command operates SnapShot pairs. To see the aureplicationlocal command and its options, type aureplicationlocal -help at the command prompt.
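For example, to display the command syntax and options:

% aureplicationlocal -help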

7.3.1 Creating SnapShot Pairs

To create SnapShot pairs:

1. From the command prompt, register the array to which you want to create the SnapShot pair, then connect to the array.

2. Execute the aureplicationlocal command to create a pair.

First, display the LUs to be assigned to a P-VOL, and then create a pair.

Example:

% aureplicationlocal -unit array-name -ss -availablelist -pvol
Available Logical Units
  LUN  Capacity  RAID Group  DP Pool  RAID Level  Type  Status
  100  30.0 GB            0  N/A      6( 9D+2P)   SAS   Normal
  200  35.0 GB            0  N/A      6( 9D+2P)   SAS   Normal
%
% aureplicationlocal -unit array-name -ss -create -pvol 200 -svol 1001 -compsplit
Are you sure you want to create pair "SS_LU0200_LU1001"? (y/n [n]): y
The pair has been created successfully.
%

3. Execute the aureplicationlocal command to verify that the pair has been created. Refer to the following example.

Example:

% aureplicationlocal -unit array-name -ss -refer
Pair name         LUN  Pair LUN  Status       Copy Type  Group
SS_LU0200_LU1001  200      1001  Split(100%)  SnapShot   ---:Ungrouped
%

The SnapShot pair is created.


7.3.2 Updating SnapShot Logical Unit

To update the V-VOL:

1. From the command prompt, register the array to which you want to update the SnapShot pair, then connect to the array.

2. Execute the aureplicationlocal command to update the pair.

Change the Split status of the SnapShot pair to Paired using the -resync option. Then change the status back to Split using the -split option.

Example:

% aureplicationlocal -unit array-name -ss -resync -pvol 200 -svol 1001
Are you sure you want to re-synchronize pair? (y/n [n]): y
The re-synchronizing of pair has been required.
%
% aureplicationlocal -unit array-name -ss -split -pvol 200 -svol 1001
Are you sure you want to split pair? (y/n [n]): y
The split of pair has been required.
%

3. Execute aureplicationlocal to confirm that the pair has been updated.

Example:

% aureplicationlocal -unit array-name -ss -refer
Pair name         LUN  Pair LUN  Status       Copy Type  Group
SS_LU0200_LU1001  200      1001  Split(100%)  SnapShot   ---:Ungrouped
%

The V-VOL was updated.


7.3.3 Restoring V-VOL to P-VOL

To restore the V-VOL to the P-VOL:

1. From the command prompt, register the array to which you want to restore the SnapShot pair, then connect to the array.

2. Execute the aureplicationlocal command to restore the pair.

First, display the pair status, and then restore the pair.

Example:

% aureplicationlocal -unit array-name -ss -refer
Pair name         LUN  Pair LUN  Status       Copy Type  Group
SS_LU0200_LU1001  200      1001  Split(100%)  SnapShot   ---:Ungrouped
%
% aureplicationlocal -unit array-name -ss -restore -pvol 200 -svol 1001
Are you sure you want to restore pair? (y/n [n]): y
The restoring of pair has been required.
%

3. Execute aureplicationlocal to confirm that the pair is being restored.

Example:

% aureplicationlocal -unit array-name -ss -refer
Pair name         LUN  Pair LUN  Status        Copy Type  Group
SS_LU0200_LU1001  200      1001  Paired( 40%)  SnapShot   ---:Ungrouped
%

The V-VOL is restored to the P-VOL.


7.3.4 Releasing SnapShot Pairs

To release the SnapShot pair and change the status to Simplex:

1. From the command prompt, register the array to which you want to release the SnapShot pair, then connect to the array.

2. Execute the aureplicationlocal command to release the pair.

Example:

% aureplicationlocal -unit array-name -ss -simplex -pvol 200 -svol 1001
Are you sure you want to release pair? (y/n [n]): y
The pair has been released successfully.
%

3. Execute aureplicationlocal to confirm that the pair has been released.

Example:

% aureplicationlocal -unit array-name -ss -refer
DMEC002015: No information is displayed.
%

The SnapShot pair is released.


7.3.5 Changing Pair Information

You can change the pair name, group name, and/or copy pace.

1. From the command prompt, register the array to which you want to change the SnapShot pair information, then connect to the array.

2. Execute the aureplicationlocal command to change the pair information.

This is an example of changing a copy pace.

Example:

% aureplicationlocal -unit array-name -ss -chg -pace slow -pvol 200 -svol 1001
Are you sure you want to change pair information? (y/n [n]): y
The pair information has been changed successfully.
%

The SnapShot pair information is changed.


7.3.6 Creating Pairs that Belong to a Group

To create multiple SnapShot pairs that belong to a group:

1. Create the first pair that belongs to a group, specifying an unused group number for the new group with the -gno option. This creates the new group and, within it, the new pair.

Example:

% aureplicationlocal -unit array-name -ss -create -pvol 200 -svol 1001 -gno 20
Are you sure you want to create pair "SS_LU0200_LU1001"? (y/n [n]): y
The pair has been created successfully.
%

2. Add a name to the group, if necessary, using the command to change the pair information.

Example:

% aureplicationlocal -unit array-name -chg -gno 20 -newgname group-name
Are you sure you want to change pair information? (y/n [n]): y
The pair information has been changed successfully.
%

3. Create the next pair that belongs to the created group, specifying the number of the created group with the -gno option.

SnapShot pairs that share the same P-VOL must use the same data pool.

4. Repeat step 3 to create multiple pairs that belong to the same group.


7.4 Applications of CLI Commands

This section provides a sample script, using Navigator 2 CLI commands, to back up a volume.

Example: A script for backup in the case of a Windows host

echo off
REM Specify the registered name of the arrays
set UNITNAME=Array1
REM Specify the group name (Specify "Ungrouped" if the pair doesn't belong to any group)
set G_NAME=Ungrouped
REM Specify the pair name
set P_NAME=SS_LU0001_LU0002
REM Specify the directory paths that are the mount points of the P-VOL and V-VOL
set MAINDIR=C:\main
set BACKUPDIR=C:\backup
REM Specify the GUIDs of the P-VOL and V-VOL
set PVOL_GUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
set SVOL_GUID=yyyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
REM Unmount the V-VOL
pairdisplay -x umount %BACKUPDIR%
REM Re-synchronize the pair (update the backup data)
aureplicationlocal -unit %UNITNAME% -ss -resync -pairname %P_NAME% -gname %G_NAME%
aureplicationmon -unit %UNITNAME% -evwait -ss -pairname %P_NAME% -gname %G_NAME% -st paired -pvol
REM Unmount the P-VOL
pairdisplay -x umount %MAINDIR%
REM Split the pair (determine the backup data)
aureplicationlocal -unit %UNITNAME% -ss -split -pairname %P_NAME% -gname %G_NAME%
aureplicationmon -unit %UNITNAME% -evwait -ss -pairname %P_NAME% -gname %G_NAME% -st split -pvol
REM Mount the P-VOL
pairdisplay -x mount %MAINDIR% Volume{%PVOL_GUID%}
REM Mount the V-VOL
pairdisplay -x mount %BACKUPDIR% Volume{%SVOL_GUID%}
<The procedure of data copy from C:\backup to backup appliance>

Note: When Windows 2000 or Windows Server 2003/Windows Server 2008 is used, the mount command of CCI must be used to mount/unmount a volume. The GUID, which is displayed by the mountvol command, is needed as an argument to the CCI mount command. For more detail about the mount command, see the Hitachi Adaptable Modular Storage Command Control Interface (CCI) Reference Guide.
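For example, the GUID can be obtained with the Windows mountvol command. This is a sketch only; C:\main is the mount point assumed in the script above, and the volume name shown is illustrative:

C:\> mountvol C:\main /L
    \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\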


Chapter 8 Operations Using CCI

This chapter provides examples of SnapShot commands using the Windows host system.

To execute SnapShot commands, from the host where CCI is installed, display the command prompt.

This chapter contains the following:

Preparing for CCI Operations (see section 8.1)

Creating the Configuration Definition File (see section 8.2)

Setting the Environment Variable (see section 8.3)

Performing SnapShot Operations (see section 8.4)

Note about Confirm Pairs by Navigator 2 (see section 8.5)


8.1 Preparing for CCI Operations

You must set the command device information and mapping information.

8.1.1 Setting the Command Device

The Command Device is a user-selected, dedicated logical volume on the array, which functions as the interface to the CCI software. The SnapShot commands are issued by CCI (HORCM) to the array Command Device.

A Command Device must be designated in order to issue SnapShot commands. The Command Device must be defined in the HORCM_CMD section of the configuration definition file for the CCI instance on the attached host. Up to 128 Command Devices can be designated for the array. You can designate Command Devices using Navigator 2.

Notes:

LUs set for Command Devices must be recognized by the host. The Command Device LU size must be greater than or equal to 33 MB.

To designate Command Device(s):

1. From the command prompt, register the array on which you want to create the Command Device, and connect to the array (a registration sketch follows).
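A minimal sketch of registering an array with the Navigator 2 CLI auunitadd command (an assumption: the unit name and controller IP addresses are hypothetical, and the exact options may differ by CLI version):

Example:
% auunitadd -unit array-name -LAN -ctl0 192.168.0.1 -ctl1 192.168.0.2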

2. Execute the aucmddev command to create a Command Device.

The following is an example of specifying LU 200 for Command Device 1.

First, display the LUs to be assigned to the Command Device; then, create a Command Device.

To use the protection function of CCI, enter enable following the –dev option.

Example:
% aucmddev -unit array-name -availablelist
Available Logical Units
  LUN   Capacity   RAID Group   DP Pool   RAID Level   Type   Status
  200   35.0 MB    0            N/A       6( 9D+2P)    SAS    Normal
  201   35.0 MB    0            N/A       6( 9D+2P)    SAS    Normal
%
% aucmddev -unit array-name -set -dev 1 200
Are you sure you want to set the command devices? (y/n [n]): y
The command devices have been set successfully.
%

3. Execute the aucmddev command to verify that the Command Device has been created.

The following shows an example.

Note: To use the alternate Command Device function, and to avoid data loss and array downtime, designate two or more Command Devices. For details on the alternate Command Device function, refer to the Hitachi Adaptable Modular Storage Command Control Interface (CCI) User’s Guide.

Example:


% aucmddev -unit array-name -refer
Command Device   LUN   RAID Manager Protect
1                200   Disable
%

4. To release a Command Device that has already been set, specify it as follows. The following example releases Command Device 1.

Example:
% aucmddev -unit array-name -rm -dev 1
Are you sure you want to release the command devices? (y/n [n]): y
This operation may cause the CCI that is accessing this command device to freeze. Please make sure to stop the CCI that is accessing this command device before performing this operation.
Are you sure you want to release the command devices? (y/n [n]): y
The specified command device will be released. Are you sure you want to execute? (y/n [n]): y
The command devices have been released successfully.
%

5. To change a Command Device that has already been set, first release it, and then set the new LU number. The following example specifies LU 201 for Command Device 1.

Example:
% aucmddev -unit array-name -set -dev 1 201
Are you sure you want to set the command devices? (y/n [n]): y
The command devices have been set successfully.
%


8.1.2 Setting Mapping Information

The following is the procedure for specifying the Mapping Information. The Mapping Information is specified using Navigator 2.

Note: When the Mapping mode is enabled, if no mapping is set for the P-VOLs and V-VOLs specified in the CCI configuration files, the hosts cannot recognize those volumes and no pair operation can be performed on them. Use LUN Manager if you want to hide the volumes from the hosts.

Note: For iSCSI models, use the autargetmap command instead of the auhgmap command.

1. From the command prompt, register the array for which you want to set the Mapping Information, and then connect to the array.

2. Execute the auhgmap command to set the Mapping Information. The following example sets LU 0 in the array to be recognized as LUN 6 by the host, connected via host group 0 of port 0A on controller 0.

Example:
% auhgmap -unit array-name -add 0 A 0 6 0
Are you sure you want to add the mapping information? (y/n [n]): y
The mapping information has been set successfully.
%


8.2 Creating the Configuration Definition File

This section includes an example that shows how to create pairs of three V-VOLs from one P-VOL.

The configuration definition file describes the system configuration that makes CCI operational. It is a text file, created and edited with any standard text editor, and can be defined on the PC where the CCI software is installed. A sample configuration definition file (HORCM_CONF) is included with the CCI software; use it as the basis for your own configuration definition file(s). The system administrator should copy the sample file, set the necessary parameters in the copy, and place the copy in the proper directory. For details on the configuration definition file, refer to the Hitachi Adaptable Modular Storage Command Control Interface (CCI) User’s Guide.

The configuration definition file can be automatically created using the mkconf command tool. For details on the mkconf command, refer to the Hitachi Adaptable Modular Storage Command Control Interface (CCI) Reference Guide.
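A minimal sketch of generating a configuration file with mkconf (an assumption: the device span, group name, and instance number are hypothetical, and the tool path and options may differ by installation; see the Reference Guide for the authoritative syntax):

Example:
C:\HORCM\Tool>echo hd1-7 | mkconf.bat -g VG01 -i 0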

The following example manually defines the configuration definition files when the system is configured with two instances on the same host.

The P-VOL and V-VOLs are conceptually diagrammed in the following figure: one P-VOL (LUN 2) pairs with three V-VOLs (LUN 3, LUN 4, and LUN 5).

1. On the host where CCI is installed, verify that CCI is not running. If the CCI software is still running, shut it down using horcmshutdown.

2. In the command prompt, make two copies of the sample file (horcm.conf).

Example:
c:\HORCM\etc> copy \HORCM\etc\horcm.conf \WINDOWS\horcm0.conf
c:\HORCM\etc> copy \HORCM\etc\horcm.conf \WINDOWS\horcm1.conf

3. Open horcm0.conf in a text editor.

4. In the HORCM_MON section, set the necessary parameters.

Important: A value greater than or equal to 6000 must be set for poll(10ms). Refer to the Hitachi Adaptable Modular Storage Command Control Interface (CCI) User’s Guide for details on calculating the poll(10ms) value. Specifying the value incorrectly may cause resource contention in internal processing; the process is temporarily suspended, which pauses the array’s internal processing. For more details on the configuration parameters, refer to the same guide.


5. In the HORCM_CMD section, specify the array’s physical drive (Command Device).

Example:
HORCM_MON
#ip_address    service   poll(10ms)   timeout(10ms)
xxxxxxxxxx     5000      12000        3000

HORCM_CMD
#dev_name      dev_name   dev_name
\\.\CMD-85000123-200-CL1-A

HORCM_LDEV
#dev_group   dev_name   Serial#    CU:LDEV(LDEV#)   MU#
VG01         oradb1     85000123   02               0
VG01         oradb2     85000123   02               1
VG01         oradb3     85000123   02               2

HORCM_INST
#dev_group   ip_address   service
VG01         xxxxxxxxxx   5001

6. In the HORCM_LDEV section, set the necessary parameters.

7. In the HORCM_INST section, set the necessary parameters.

8. Save the configuration definition file.

9. Repeat steps 3 through 8 for the horcm1.conf file (see below).

Example:
HORCM_MON
#ip_address    service   poll(10ms)   timeout(10ms)
xxxxxxxxxx     5001      12000        3000

HORCM_CMD
#dev_name      dev_name   dev_name
\\.\CMD-85000123-200-CL1-A

HORCM_LDEV
#dev_group   dev_name   Serial#    CU:LDEV(LDEV#)   MU#
VG01         oradb1     85000123   03               0
VG01         oradb2     85000123   04               0
VG01         oradb3     85000123   05               0

HORCM_INST
#dev_group   ip_address   service
VG01         xxxxxxxxxx   5000

10. Enter the following in the command prompt to verify the connection between CCI and the array.

Example:

C:\>cd HORCM\etc
C:\HORCM\etc>echo hd1-7 | .\inqraid
Harddisk 1 -> [ST] CL1-A Ser =85000123 LDEV = 200 [HITACHI ] [DF600F-CM ]
Harddisk 2 -> [ST] CL1-A Ser =85000123 LDEV =   2 [HITACHI ] [DF600F    ]
             HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = NONE MU#2 = NONE]
             RAID5[Group  2-0] SSID = 0x0000
Harddisk 3 -> [ST] CL1-A Ser =85000123 LDEV =   3 [HITACHI ] [DF600F    ]
             HORC = SMPL HOMRCF[MU#0 = SMPL MU#1 = NONE MU#2 = NONE]
             RAID5[Group  3-0] SSID = 0x0000
Harddisk 4 -> [ST] CL1-A Ser =85000123 LDEV =   2 [HITACHI ] [DF600F    ]
             HORC = SMPL HOMRCF[MU#0 = NONE MU#1 = SMPL MU#2 = NONE]
             RAID5[Group  2-1] SSID = 0x0000
Harddisk 5 -> [ST] CL1-A Ser =85000123 LDEV =   4 [HITACHI ] [DF600F    ]
             HORC = SMPL HOMRCF[MU#0 = NONE MU#1 = SMPL MU#2 = NONE]
             RAID5[Group  4-0] SSID = 0x0000
Harddisk 6 -> [ST] CL1-A Ser =85000123 LDEV =   2 [HITACHI ] [DF600F    ]
             HORC = SMPL HOMRCF[MU#0 = NONE MU#1 = NONE MU#2 = SMPL]
             RAID5[Group  2-2] SSID = 0x0000
Harddisk 7 -> [ST] CL1-A Ser =85000123 LDEV =   5 [HITACHI ] [DF600F    ]
             HORC = SMPL HOMRCF[MU#0 = NONE MU#1 = NONE MU#2 = SMPL]
             RAID5[Group  5-0] SSID = 0x0000
C:\HORCM\etc>

For details on the configuration definition file, refer to the Hitachi Adaptable Modular Storage Command Control Interface (CCI) User’s Guide.


8.3 Setting the Environment Variable

To perform SnapShot operations, you must set the environment variables. An example for a system configured with two instances on the same host follows.

1. Set the environment variable for each instance. Enter the following from the command prompt.

Example: C:\HORCM\etc>set HORCMINST=0
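When running both instances from separate command prompts, the same convention applies to the second instance (a sketch mirroring the example above):

Example:
C:\HORCM\etc>set HORCMINST=1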

2. To use SnapShot, you must set the environment variable shown below.

Example: C:\HORCM\etc>set HORCC_MRCF=1

3. Execute the horcmstart script and execute pairdisplay to verify the configuration.

Example:
C:\HORCM\etc>horcmstart 0 1
starting HORCM inst 0
HORCM inst 0 starts successfully.
starting HORCM inst 1
HORCM inst 1 starts successfully.
C:\HORCM\etc>pairdisplay -g VG01
Group PairVOL(L/R) (Port#,TID,LU-M) ,Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
VG01 oradb1(L) (CL1-A , 1, 2-0 )85000123 2.SMPL ----,----- ---- -
VG01 oradb1(R) (CL1-A , 1, 3-0 )85000123 3.SMPL ----,----- ---- -
VG01 oradb2(L) (CL1-A , 1, 2-1 )85000123 2.SMPL ----,----- ---- -
VG01 oradb2(R) (CL1-A , 1, 4-0 )85000123 4.SMPL ----,----- ---- -
VG01 oradb3(L) (CL1-A , 1, 2-2 )85000123 2.SMPL ----,----- ---- -
VG01 oradb3(R) (CL1-A , 1, 5-0 )85000123 5.SMPL ----,----- ---- -


8.4 Performing SnapShot Operations

Figure 8.1 shows pair operation using CCI.

Figure 8.1 SnapShot Pair Status Transitions

[Figure: pair status transition diagram. paircreate moves a pair from SMPL to PAIR (create/update pair); paircreate -split moves it directly to PSUS. pairsplit moves PAIR to PSUS, and pairresync moves PSUS back to PAIR. pairresync -restore starts a restoration (COPY(RS-R)), which returns to PSUS when the restoration completes. An error in PAIR or COPY(RS-R) changes the status to PSUE (not synchronized). pairsplit -S returns the pair to SMPL from any of these statuses.]


8.4.1 Confirming Pair Status

Table 8.1 shows the correspondence between the pair statuses shown by CCI and by Navigator 2.

Table 8.1 Pair Status

CCI         Navigator 2             Description
SMPL        Simplex                 A pair is not created.
PAIR        Paired                  Status that exists to provide compatibility with ShadowImage.
RCPY        Reverse Synchronizing   The backup data retained in the V-VOL is being restored to the P-VOL.
PSUS/SSUS   Split                   The P-VOL data at the time of the SnapShot instruction is retained in the V-VOL.
PFUS        Threshold Over          The usage rate of the data pool has reached the data pool threshold.
PSUE        Failure                 Copying is forcibly suspended because a failure occurred.

To confirm SnapShot pairs:

1. If the group name in the configuration definition file is VG01, execute the pairdisplay command to verify the pair status and the configuration.

Example:
C:\HORCM\etc>pairdisplay -g VG01
Group PairVOL(L/R) (Port#,TID,LU-M) ,Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
VG01 oradb1(L) (CL1-A , 1, 2-0 )85000123 2.P-VOL PSUS,----- ---- -
VG01 oradb1(R) (CL1-A , 1, 3-0 )85000123 3.S-VOL SSUS,----- ---- -

The pair status is displayed. For details on the pairdisplay command and its options, refer to the Hitachi Adaptable Modular Storage Command Control Interface (CCI) Reference Guide.

8.4.2 Paircreate Operation

To create SnapShot pairs:

1. If the group name in the configuration definition file is VG01, execute pairdisplay to verify that the status of the SnapShot volumes is SMPL (see section 8.3).

2. Execute paircreate. Then, execute pairevtwait to verify that the status of each volume is PSUS.

Example:
C:\HORCM\etc>paircreate -split -g VG01 -d oradb1 -vl
C:\HORCM\etc>pairevtwait -g VG01 -s psus -t 300 10
pairevtwait : Wait status done.

Prelim

inary

Page 135: Copy on WriteSnapShotUsersGuide

123

3. Execute pairdisplay to verify the pair status and the configuration.

Example:
C:\HORCM\etc>pairdisplay -g VG01
Group PairVOL(L/R) (Port#,TID,LU-M) ,Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
VG01 oradb1(L) (CL1-A , 1, 2-0 )85000123 2.P-VOL PSUS,----- ---- -
VG01 oradb1(R) (CL1-A , 1, 3-0 )85000123 3.S-VOL SSUS,----- ---- -
VG01 oradb2(L) (CL1-A , 1, 2-1 )85000123 2.SMPL ----,----- ---- -
VG01 oradb2(R) (CL1-A , 1, 4-0 )85000123 4.SMPL ----,----- ---- -
VG01 oradb3(L) (CL1-A , 1, 2-2 )85000123 2.SMPL ----,----- ---- -
VG01 oradb3(R) (CL1-A , 1, 5-0 )85000123 5.SMPL ----,----- ---- -

To ensure that the data of two or more SnapShot images in a group is of the same point in time, a consistency group (CTG) is used. The method of creating pairs using the CTG is explained below.

1. If the group name in the configuration definition file is VG01, execute pairdisplay to verify that the status of the SnapShot volumes is SMPL (see section 8.3).

2. Execute paircreate -m grp. Then, execute pairevtwait to verify that the status of each volume is PAIR.

Example:
C:\HORCM\etc>paircreate -g VG01 -vl -m grp
C:\HORCM\etc>pairevtwait -g VG01 -s pair -t 300 10
pairevtwait : Wait status done.

3. Next, execute pairsplit. Then, execute pairevtwait to verify that the status of each volume is PSUS.

Example:
C:\HORCM\etc>pairsplit -g VG01
C:\HORCM\etc>pairevtwait -g VG01 -s psus -t 300 10
pairevtwait : Wait status done.

4. Execute pairdisplay to verify the pair status and the configuration.

Example:
C:\HORCM\etc>pairdisplay -g VG01
Group PairVOL(L/R) (Port#,TID,LU-M) ,Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
VG01 oradb1(L) (CL1-A , 1, 2-0 )85000123 2.P-VOL PSUS,----- ---- -
VG01 oradb1(R) (CL1-A , 1, 3-0 )85000123 3.S-VOL SSUS,----- ---- -
VG01 oradb2(L) (CL1-A , 1, 2-1 )85000123 2.P-VOL PSUS,----- ---- -
VG01 oradb2(R) (CL1-A , 1, 4-0 )85000123 4.S-VOL SSUS,----- ---- -
VG01 oradb3(L) (CL1-A , 1, 2-2 )85000123 2.P-VOL PSUS,----- ---- -
VG01 oradb3(R) (CL1-A , 1, 5-0 )85000123 5.S-VOL SSUS,----- ---- -

Note: When using the CTG, you must specify the -m grp option. However, the -split option and the -m grp option cannot be used at the same time. When taking a SnapShot image using the CTG, split the pair after changing the pair status to PAIR with the paircreate command.

The SnapShot pair is created.

8.4.3 Updating the V-VOL

To update the V-VOL:

1. If the group name in the configuration definition file is VG01, change the PSUS status of the SnapShot pair to PAIR using pairresync, and then change the status back to PSUS using pairsplit.

Example:
C:\HORCM\etc>pairresync -g VG01 -d oradb1
C:\HORCM\etc>pairevtwait -g VG01 -s pair -t 300 10
pairevtwait : Wait status done.
C:\HORCM\etc>pairsplit -g VG01 -d oradb1

2. Execute pairdisplay to verify the pair status and the configuration.

Example:
C:\HORCM\etc>pairdisplay -g VG01
Group PairVOL(L/R) (Port#,TID,LU-M) ,Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
VG01 oradb1(L) (CL1-A , 1, 2-0 )85000123 2.P-VOL PSUS,----- ---- -
VG01 oradb1(R) (CL1-A , 1, 3-0 )85000123 3.S-VOL SSUS,----- ---- -
VG01 oradb2(L) (CL1-A , 1, 2-1 )85000123 2.SMPL ----,----- ---- -
VG01 oradb2(R) (CL1-A , 1, 4-0 )85000123 4.SMPL ----,----- ---- -
VG01 oradb3(L) (CL1-A , 1, 2-2 )85000123 2.SMPL ----,----- ---- -
VG01 oradb3(R) (CL1-A , 1, 5-0 )85000123 5.SMPL ----,----- ---- -

The V-VOL is updated.


8.4.4 Restoring a V-VOL to the P-VOL

To restore the V-VOL to the P-VOL:

1. If the group name in the configuration definition file is VG01, execute pairresync to restore the V-VOL to the P-VOL.

Example: C:\HORCM\etc>pairresync -restore -g VG01 -d oradb1 -c 15

2. Execute pairdisplay to verify the pair status and the configuration.

Example:
C:\HORCM\etc>pairdisplay -g VG01
Group PairVOL(L/R) (Port#,TID,LU-M) ,Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
VG01 oradb1(L) (CL1-A , 1, 2-0 )85000123 2.P-VOL RCPY,----- ---- -
VG01 oradb1(R) (CL1-A , 1, 3-0 )85000123 3.S-VOL RCPY,----- ---- -
VG01 oradb2(L) (CL1-A , 1, 2-1 )85000123 2.SMPL ----,----- ---- -
VG01 oradb2(R) (CL1-A , 1, 4-0 )85000123 4.SMPL ----,----- ---- -
VG01 oradb3(L) (CL1-A , 1, 2-2 )85000123 2.SMPL ----,----- ---- -
VG01 oradb3(R) (CL1-A , 1, 5-0 )85000123 5.SMPL ----,----- ---- -

3. Execute the pairsplit command to change the pair status from PAIR to PSUS.

Example: C:\HORCM\etc>pairsplit -g VG01 -d oradb1

The V-VOL is restored to the P-VOL.


8.4.5 Releasing SnapShot Pairs

To release the SnapShot pair and change the status to SMPL:

1. If the group name in the configuration definition file is VG01, execute the pairdisplay command to verify that the status of the SnapShot pair is PSUS or PSUE.

Example:
C:\HORCM\etc>pairdisplay -g VG01
Group PairVOL(L/R) (Port#,TID,LU-M) ,Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
VG01 oradb1(L) (CL1-A , 1, 2-0 )85000123 2.P-VOL PSUS,----- ---- -
VG01 oradb1(R) (CL1-A , 1, 3-0 )85000123 3.S-VOL SSUS,----- ---- -
VG01 oradb2(L) (CL1-A , 1, 2-1 )85000123 2.SMPL ----,----- ---- -
VG01 oradb2(R) (CL1-A , 1, 4-0 )85000123 4.SMPL ----,----- ---- -
VG01 oradb3(L) (CL1-A , 1, 2-2 )85000123 2.SMPL ----,----- ---- -
VG01 oradb3(R) (CL1-A , 1, 5-0 )85000123 5.SMPL ----,----- ---- -

2. Execute the pairsplit -S command to release the SnapShot pair.

Example: C:\HORCM\etc>pairsplit -S -g VG01 -d oradb1

3. Execute the pairdisplay command to verify that the pair status changed to SMPL.

Example:
C:\HORCM\etc>pairdisplay -g VG01
Group PairVOL(L/R) (Port#,TID,LU-M) ,Seq#,LDEV#.P/S,Status, Seq#,P-LDEV# M
VG01 oradb1(L) (CL1-A , 1, 2-0 )85000123 2.SMPL ----,----- ---- -
VG01 oradb1(R) (CL1-A , 1, 3-0 )85000123 3.SMPL ----,----- ---- -
VG01 oradb2(L) (CL1-A , 1, 2-1 )85000123 2.SMPL ----,----- ---- -
VG01 oradb2(R) (CL1-A , 1, 4-0 )85000123 4.SMPL ----,----- ---- -
VG01 oradb3(L) (CL1-A , 1, 2-2 )85000123 2.SMPL ----,----- ---- -
VG01 oradb3(R) (CL1-A , 1, 5-0 )85000123 5.SMPL ----,----- ---- -

The SnapShot pair is released.


8.5 Notes on Confirming Pairs with Navigator 2

This section describes points to note when you use Navigator 2 to confirm pairs that were created by CCI.

Group name and pair name

The group name and pair name defined in the configuration definition file are different concepts from the group name and pair name displayed by Navigator 2. A pair created by CCI is displayed as an unnamed pair by Navigator 2.

Group

The group defined in the configuration definition file is a different concept from the group (CTG) managed by the arrays. Even if pairs are defined in a group in the configuration definition file when CCI creates them, Navigator 2 displays them as “Ungrouped” pairs. For how to manage a group defined in the configuration definition file as a CTG, see the Hitachi Adaptable Modular Storage Command Control Interface (CCI) Reference Guide.


Chapter 9 System Monitoring and Maintenance

The following sections are included:

Monitoring of Pair Failure (see section 9.1)

Monitoring of Data Pool Usage (see section 9.2)


9.1 Monitoring of Pair Failure

To verify that SnapShot pairs operate correctly and that the data is retained in the V-VOLs, check the pair status regularly. When a hardware failure occurs or the data pool runs short, the pair status changes to Failure and the V-VOL data is not retained. Check that the pair status is something other than Failure. When the pair status is Failure, restore the status as described in Chapter 10, Troubleshooting.

Results when a pair failure occurs:

For SnapShot, the following processes are executed when a pair failure occurs (Table 9.1).

Table 9.1 Pair Failure Results

Management Software   Results
Navigator 2           A message is displayed in the event log. The pair status is changed to Failure.
CCI                   The pair status is changed to PSUE. An error message is output to the system log file. (For a UNIX® system, the syslog file; for a Windows® 2000 system, the event log.)

Informing a user when a pair failure occurs:

– When the pair status changes to Failure, a trap is reported by the SNMP Agent Support Function.

– When using CCI, the following message is output to the event log. For details, refer to the Hitachi Adaptable Modular Storage Command Control Interface (CCI) Reference Guide.

Table 9.2 CCI System Log Message

Message ID   Condition                               Cause
HORCM_102    The volume is suspended in code 0006.   The pair status was suspended due to code 0006.

Monitoring of pair failure using a script

When the SNMP Agent Support Function is not used, monitor for pair failures with a script that uses Navigator 2 CLI commands. Refer to the sample script below.

Example: A script for informing the user of the monitoring result on a Windows host

The following script monitors two pairs (SS_LU0001_LU0002 and SS_LU0003_LU0004) and informs the user when a pair failure occurs. The script is activated every few minutes. The disk array must be registered beforehand.

echo OFF
REM Specify the registered name of the arrays
set UNITNAME=Array1
REM Specify the name of the target group (Specify “Ungrouped” if the pair doesn’t belong to any group)
set G_NAME=Ungrouped
REM Specify the names of the target pairs
set P1_NAME=SS_LU0001_LU0002
set P2_NAME=SS_LU0003_LU0004
REM Specify the value that indicates “Failure”
set FAILURE=14
REM Checking the first pair
:pair1
aureplicationmon -unit %UNITNAME% -evwait -ss -pairname %P1_NAME% -gname %G_NAME% -nowait
if errorlevel %FAILURE% goto pair1_failure
goto pair2
:pair1_failure
<The procedure for informing a user>*
REM Checking the second pair
:pair2
aureplicationmon -unit %UNITNAME% -evwait -ss -pairname %P2_NAME% -gname %G_NAME% -nowait
if errorlevel %FAILURE% goto pair2_failure
goto end
:pair2_failure
<The procedure for informing a user>*
:end

Note: Describe the following processes in the procedure for informing a user of the pair status, as necessary (a sketch of an event log notification follows the list).

E-Mail notification process

Screen display process

SNMP notification process

Event log notification process
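A minimal sketch of an event log notification that could stand in for <The procedure for informing a user>, using the standard Windows eventcreate command (the event source, ID, and message text are hypothetical):

eventcreate /T ERROR /ID 999 /L APPLICATION /SO SnapShotMonitor /D "SnapShot pair %P1_NAME% changed to Failure"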


9.2 Monitoring of Data Pool Usage

When a data pool runs short, the status of all pairs using the data pool changes to Failure, and the V-VOL data cannot be retained because the differential data cannot be saved. To prevent the data pool from running short and the pair statuses from changing to Failure, monitor the used data pool capacity. Even when a hardware maintenance contract is in place (including the free-of-charge warranty period), the user is responsible for monitoring the data pool capacity so that it does not run short.

When you see a risk of data pool shortage, expand the data pool capacity (refer to Chapter 10, Troubleshooting) or free up data pool capacity by releasing pairs whose V-VOL data does not need to be retained.

Method for informing a user that the threshold value of the used data pool capacity is exceeded:

– To give advance notice of the risk of a data pool shortage, SnapShot changes the status of pairs that use the data pool to Threshold Over, and a trap is reported by the SNMP Agent Support Function.

– You can get the pair status as a return value using the CCI pairvolchk -ss command (see the sketch after this list). When the status is PFUS, the return value is 28. (When the LU is specified, the values for the P-VOL and V-VOL are 28 and 38, respectively.) For details on the pairvolchk command, refer to the Hitachi Adaptable Modular Storage Command Control Interface (CCI) Reference Guide.

– The used capacity must be monitored for each data pool.

– The used capacity (usage rate) of a data pool can be referenced through CCI or Navigator 2. It is recommended not only to monitor the data pool threshold but also to track and manage the hourly transition of the used data pool capacity. For details on referencing the data pool usage rate, refer to the Hitachi Adaptable Modular Storage Command Control Interface (CCI) Reference Guide.
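A minimal sketch of checking for Threshold Over from a batch script via the pairvolchk return value (an assumption: the group and pair names follow the earlier CCI examples):

REM PFUS (Threshold Over) returns 28 for the P-VOL
pairvolchk -ss -g VG01 -d oradb1
if errorlevel 28 if not errorlevel 29 echo Data pool threshold exceeded (PFUS)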


Monitoring the data pool threshold over by script

When the SNMP Agent Support Function is not used, monitor for data pool threshold over with a script that uses Navigator 2 CLI commands. Refer to the sample script below.

Example: A script for informing the user of the monitoring result on a Windows host

The following script monitors two pairs (SS_LU0001_LU0002 and SS_LU0003_LU0004) and informs the user when a data pool threshold over occurs. The script is activated every few minutes. The disk array must be registered beforehand.

echo OFF
REM Specify the registered name of the arrays
set UNITNAME=Array1
REM Specify the name of the target group (Specify “Ungrouped” if the pair doesn’t belong to any group)
set G_NAME=Ungrouped
REM Specify the names of the target pairs
set P1_NAME=SS_LU0001_LU0002
set P2_NAME=SS_LU0003_LU0004
REM Specify the value that indicates “Threshold over”
set THRESHOLDOVER=15
REM Checking the first pair
:pair1
aureplicationmon -unit %UNITNAME% -evwait -ss -pairname %P1_NAME% -gname %G_NAME% -nowait
if errorlevel %THRESHOLDOVER% goto pair1_thresholdover
goto pair2
:pair1_thresholdover
<The procedure for informing a user>*
REM Checking the second pair
:pair2
aureplicationmon -unit %UNITNAME% -evwait -ss -pairname %P2_NAME% -gname %G_NAME% -nowait
if errorlevel %THRESHOLDOVER% goto pair2_thresholdover
goto end
:pair2_thresholdover
<The procedure for informing a user>*
:end

Note: Describe the following processes in the procedure for informing a user of the pair status, as necessary.

E-Mail notification process

Screen display process

SNMP notification process

Event log notification process

A process to add the LU(s) to the data pool


Chapter 10 Troubleshooting

The following sections are included:

Troubleshooting (see section 10.1)


10.1 Troubleshooting

In the case of SnapShot, an operation to restore the pair status must be performed when a pair failure occurs or when the used data pool capacity exceeds the threshold value. Three factors cause pair trouble: a hardware failure such as a multiple drive failure, a data pool shortage, and depletion of the DP pool capacity. The procedure for restoring the pair status differs in each case.

When a pair failure occurs due to a hardware failure, the array must be maintained first. SnapShot pair operations may be required during the maintenance work. Even when maintenance personnel maintain the array, SnapShot pair operations are performed by the user, so cooperate with the service personnel during the maintenance work.

10.1.1 Pair Failure

When a pair failure occurs while a SnapShot pair is being operated, first determine whether the failure is caused by a hardware failure or by a data pool shortage. Using Navigator 2, check the message displayed in the Event Log tab of the Alert & Events window, and check the status of the data pool used by the pairs whose status has changed to Failure. When the message “I6D000 Data pool does not have free space (Data pool-xx)” (xx is the data pool number) is displayed, the pair failure is considered to have occurred due to a data pool shortage. Otherwise, the pair failure is considered to have occurred because of a hardware failure or because the DP pool capacity is depleted.

When the data pool has run short, release all pairs in the Failure status that use the depleted data pool. The data pool shortage is considered to have occurred because of a problem in the system configuration. After deleting the pairs, review the configuration, including the data pool capacity and the number of V-VOLs, and then perform the operation to restore the status of the SnapShot pairs. All restoration operations for a pair failure caused by a data pool shortage must be performed by the user.

When a pair failure occurs because of a hardware failure, maintain the disk array first, and recover the pair by a pair operation after the array failure has been removed. In addition, pair operations may be necessary as part of the array maintenance work. For example, when an LU where a failure occurred must be formatted and the LU is a SnapShot P-VOL, the formatting must be done after the pair is released. Even when maintenance personnel maintain the array, the service personnel’s work is limited to failure recovery; the operation to restore the status of a SnapShot pair is performed by the user.

To restore the status of the SnapShot pair, release the pair and then create it again. Figure 10.1 shows the workflow when a pair failure occurs, from determining the cause to restoring the pair status by pair operations. Table 10.1 shows the division of work between the service personnel and the user.


Figure 10.1 Pair Status Information Example Using SnapShot

[Figure: flowchart starting when the status changes to Failure. Refer to the Event Log message with Navigator 2; when message code I6D000 is displayed, a shortage of pool capacity has occurred. If the Failure status is caused by insufficient data pool capacity, split all the pairs in the Failure status, check the pool capacity and the number of V-VOLs, and then create the pairs again*. Otherwise, if a DP-VOL is used for the operation target pair, check the capacity of the DP pool to which the DP-VOL belongs (refer to section 10.1.3); maintain the array to remove the hardware failure (see the Troubleshooting chapter in the Adaptable Modular Storage User’s Guide), split the pair, and create the pair again*.
*: In case of a P-VOL failure, restore the backup data to the P-VOL and then execute the pair create operation.]


Table 10.1 Operational Notes for SnapShot Operations

Action                                                                           Action Taken By
Monitoring pair failure.                                                         User
Confirming the Event Log message using Navigator 2 (confirming the data pool).   User
Verifying the status of the array.                                               User
Calling maintenance personnel when the array malfunctions.                       User
Calling the Hitachi Support Center for other reasons.                            User (only users registered to receive support)
Splitting the pair.                                                              User
Hardware maintenance.                                                            Hitachi Customer Service
Reconfiguring and recovering the pair.                                           User

In addition, check the pair status immediately before the pair failure occurred. When the failure occurs while the pair status is Reverse Synchronizing (during restoration from a V-VOL to a P-VOL), the coverage of data assurance and the detailed procedure for restoring the pair status differ from a failure that occurs in any other pair status. Table 10.2 shows the data assurance and the procedure for restoring the pair when a pair failure occurs.

When the pair status is Reverse Synchronizing, the data copying for the restoration is done in the background. When the restoration completes normally, the host sees the P-VOL data as if it had been replaced with the V-VOL data from immediately after the start of the restoration. When a pair failure occurs, however, the host can no longer be shown the P-VOL as if it were replaced with the V-VOL, and the P-VOL data becomes invalid because copying to the P-VOL did not complete. Keep this in mind.

Table 10.2 Data Assurance and the Method for Recovering the Pair

State before Failure: Other than Reverse Synchronizing
Data Assurance: P-VOL: assured. V-VOL: not assured.
Action Taken after Failure: Split the pair, and then create a pair again. Even if the P-VOL data is assured, the pair may already have been released because a failure such as a multiple drive blockade occurred in an LU that makes up a data pool used by the pair. In that case, confirm that the data exists in the P-VOL, and then create a pair. Note that the V-VOL data generated is not the previously invalidated data but the P-VOL data at the time the pair is newly created.

State before Failure: Reverse Synchronizing
Data Assurance: P-VOL: not assured. V-VOL: not assured.
Action Taken after Failure: Split the pair, restore the backup data to the P-VOL, and then create a pair. The pair may already have been released because a failure such as a double drive failure occurred in an LU that makes up the P-VOL or a data pool. In that case, confirm that the backup data restoration to the P-VOL has completed, and then create a pair. Note that the V-VOL data generated is not the previously invalidated data but the P-VOL data at the time the pair is newly created.


10.1.2 Data Pool Capacity Exceeds Threshold Value

When the used capacity of a data pool exceeds the threshold value, the status of pairs using the data pool becomes Threshold Over. Even then, the pairs continue to operate as they do in the Split status, but it is necessary to secure data pool capacity early because the data pool is likely to become exhausted. The user performs the operation to secure the data pool capacity: release the pairs that are using the data pool, or expand the data pool capacity.

Before releasing a pair, back up the V-VOL data to a tape device if necessary, because the V-VOL data of the released pair becomes invalid.

To expand the data pool capacity (refer to section 4.5.2), add one or more LUs to the data pool. However, no LU can be added when 64 LUs have already been set for the data pool to be expanded, or when the number of LUs set for the data pools of the whole array has already reached 128.

10.1.3 Cases and Solutions Using the DP-VOLs

When a SnapShot pair is configured with a DP-VOL as a pair target LU, the SnapShot pair status may become Failure depending on the combination of the pair status and the DP pool status shown in Table 10.3. Perform the recovery method shown in Table 10.3 for all the DP pools to which the P-VOLs and data pools where the pair failures occurred belong.

Table 10.3 Cases and Solutions Using the DP-VOLs

Pair Status: Split or Reverse Synchronizing
DP Pool Status: Formatting
Case: Although DP pool capacity is being added, the format progress is slow and the required area cannot be allocated.
Solution: Wait until the formatting of the DP pool for the total capacity of the DP-VOLs created in the DP pool is completed.

Pair Status: Split or Reverse Synchronizing
DP Pool Status: Capacity Depleted
Case: The DP pool capacity is depleted and the required area cannot be allocated.
Solution: To return the DP pool status to normal, grow the DP pool capacity, perform DP pool optimization, and increase the DP pool free capacity.


Appendix A SnapShot Specifications

A.1 External Specifications

Table A.1 lists and describes the external specifications for SnapShot.

Table A.1 External Specifications

Applicable model: AMS2100/AMS2300/AMS2500 (dual configuration only).

Host interface: AMS2100/AMS2300: Fibre or iSCSI. AMS2500: Fibre or iSCSI.

Number of pairs: AMS2300/AMS2500: 2,046 (maximum); AMS2100: 1,022 (maximum). When one P-VOL pairs with thirty-two V-VOLs, the number of pairs is thirty-two.

Cache memory: AMS2500: 2, 4, 6, 8, 10, 12, 16 GB/CTL. AMS2300: 2, 4, 8 GB/CTL. AMS2100: 2, 4 GB/CTL.

Command devices: When operating a pair by using CCI, a command device must be set. Up to 128 per array can be set. The command device LU size must be greater than or equal to 33 MB.

Unit of pair management: LUs are the target of SnapShot pairs, which are managed per logical unit.

Pair structure (number of V-VOLs per P-VOL): 1:32.

RAID level: RAID 1+0 (2D+2D to 8D+8D), RAID 5 (2D+1P to 15D+1P), RAID 6 (2D+2P to 28D+2P), RAID 1 (1D+1D).

Combination of RAID levels: All combinations are supported between the P-VOL and the LUs for the data pool. The number of data disks does not have to be the same.

LU size: The LU size of the P-VOL must be equal to that of the V-VOL.

Types of drive for a P-VOL/data pool: Any drive type supported by the array can be set for the P-VOL and a data pool. However, it is recommended to assign an LU consisting of SAS, SAS 7.2K, or SSD drives to a P-VOL and a data pool. When creating a pair with LUs configured on SATA drives, the use conditions of the SATA drives may differ.

Consistency Group (CTG) number: Max 256/array. AMS2300/AMS2500: 2,046 pairs/CTG (maximum); AMS2100: 1,022 pairs/CTG (maximum).

Using a data pool: Restarting the array is required to secure the data pool resource.


Data pool: Max 64/array (data pool numbers 0 to 63). Up to 64 LUs can be set for one data pool, and up to 128 LUs per array can be set for data pools. When the array firmware version is less than 0852/A, a unified LU cannot be set for a data pool. A normal LU and a DP-VOL cannot coexist in the same data pool. When you specify a DP-VOL as a data pool, the data pool usage does not equal the DP pool usage.

Max supported capacity of P-VOL and data pool: The supported capacity of SnapShot is restricted. For details, see section 3.3.

Access to the LUs for a data pool from a host: The LUs for a data pool are not recognizable from the host.

Expansion of data pool capacity: Possible. The capacity is expanded by adding LUs to the data pool. A data pool can be extended while a pair that uses the data pool exists. However, LUs created in RAID groups with different drive types cannot be mixed.

Reduction of data pool capacity: Possible only when all the pairs that use the data pool have been released.

Unification/growing/shrinking of an LU that is set for a data pool: Not possible.

Formatting/deleting/growing/shrinking of LUs in a pair: Not possible.

Deleting a RAID group in a pair: Not possible.

Pairing with a unified LU: When the array firmware version is less than 0852/A, the capacity of each LU before the unification must be 1 GB or larger.

Formatting/LU unification regarding the V-VOL: Not possible.

Deletion of the V-VOL: Possible only when the V-VOL does not belong to any pair.

Interchange between the P-VOL and V-VOL: Not possible.

Restriction during RAID group expansion: A volume (P-VOL or V-VOL) in a RAID group that is being expanded cannot operate a pair. You can expand the RAID group to which the volume belongs only when the pair status of the target volume is Simplex or Paired.

Mixing SnapShot and non-SnapShot: Mixing SnapShot LUs (P-VOL, data pool, and V-VOL) and non-SnapShot LUs is available within the AMS array. However, note that there may be some effects on performance. Performance decreases while pair operations are in progress (even for the non-SnapShot LUs).

Load balancing: The P-VOL and the LUs set in the data pool are excluded from load balancing.

Concurrent use of ShadowImage: SnapShot and ShadowImage can be used together at the same time, but a cascade between SnapShot and ShadowImage is not supported. When SnapShot and ShadowImage are used together, the number of CTGs is limited to a maximum of 256, combining those of SnapShot and ShadowImage.

Concurrent use of unified LU: Available.

Concurrent use of LUN Manager: Available.


Concurrent use of Password Protection: Available.

Concurrent use of Volume Migration: Available. However, a P-VOL, an S-VOL, or a reserved LU of Volume Migration cannot be specified as a P-VOL of SnapShot.

Concurrent use of SNMP Agent Support Function: Available. When the pair status changes to Failure or Threshold Over, a trap is reported.

Concurrent use of Cache Residency Manager: Available. However, an LU specified for Cache Residency (LU cache residence) cannot be set as a P-VOL, a V-VOL, or an LU for a data pool.

Concurrent use of Cache Partition Manager: Available. Cache partition information is initialized when SnapShot is installed while Cache Partition Manager is already in use. For details, see Appendix B. When using SnapShot with Cache Partition Manager, the segment size of the LUs belonging to a data pool must be the default size (16 kB) or less.

Concurrent use of SNMP Agent: Available. Traps are sent when the following events occur: the pair status changes to Threshold Over; the pair status changes to Failure.

Concurrent use of Data Retention Utility: Available. When S-VOL Disable is set for an LU, pair formation using that LU as a V-VOL is suppressed. Setting S-VOL Disable on a volume that has already become a V-VOL is not suppressed only when the pair status is Split. In addition, when S-VOL Disable is set for a P-VOL, SnapShot restoration is suppressed.

Concurrent use of Power Saving: Available. However, when a P-VOL is included in a RAID group for which Power Saving has been specified, no pair operation can be performed except the pair split and the pair release.

Concurrent use of TrueCopy: TrueCopy can be cascaded with SnapShot. For details, see section 2.5.

Concurrent use of TCE: TCE can be cascaded with a SnapShot P-VOL. For details, see section 2.6.

Concurrent use of Dynamic Provisioning: Available. For details, see section 4.3.14.

License: SnapShot becomes usable by entering the key code.

Potential effect caused by installation of the SnapShot function: A reboot is required to acquire data pool resources.

Differential Management LU (DMLU): One or two (one of them is for mirroring). The Differential Management LU size must be greater than or equal to 10 GB. It is recommended to set two Differential Management LUs under the following conditions: created in different RAID groups, and allocated to different controllers.

Action to be taken when the limit of usable data pool capacity is exceeded: When the percentage of the data pool capacity being used reaches 100%, the statuses of all the V-VOLs that use the data pool become Failure.


Potential effect caused by a P-VOL failure: V-VOL data also exists in the P-VOL; therefore, a P-VOL failure results in a V-VOL failure as well.

Reduction of the memory: The memory cannot be reduced while the ShadowImage, SnapShot, TrueCopy, or TCE function is validated. Reduce the memory after invalidating the function.


Appendix B Installing SnapShot when Cache Partition Manager is Being Used

SnapShot uses part of the cache area to manage its internal resources; the cache capacity that Cache Partition Manager can use therefore decreases.

Note that the cache partition information is initialized as shown in Figure B.1 and Figure B.2 when SnapShot is installed while Cache Partition Manager is already in use.

All the logical units are moved to the master partitions on the side of the default owner controller.

All sub-partitions are deleted, and the size of each master partition is reduced to half of the user data area after SnapShot is installed.

Figure B.1 When Cache Partition Manager is Used

[Figure: before SnapShot installation, each controller holds a system area and cache partitions. Controller 0 holds master partition #0 and sub-partition #2 (partitions #0 and #2 are mirrored on controller 1); controller 1 holds master partition #1 (mirrored on controller 0). Logical units belong to partitions #0, #1, and #2.]


Figure B.2 When SnapShot is Installed while Cache Partition Manager is Used

[Figure: after SnapShot installation, each controller holds a system area and only the master partitions: controller 0 holds master partition #0 and controller 1 holds master partition #1 (mirrored across controllers). Sub-partition #2 has been deleted, and its logical units now belong to the master partitions.]


Index

C
cascade connection of SnapShot with TCE, 25
cascade connection of SnapShot with TrueCopy, 20
cascade restrictions with data pool of SnapShot, 24
cascade restrictions with P-VOL of SnapShot, 21
cascade restrictions with P-VOL of TCE, 26
cascade restrictions with S-VOL of TCE, 27
cascade restrictions with V-VOL of SnapShot, 22
CLI
  disabling, 99
  enabling, 99
  installing, 96
  uninstalling, 98
configuration definition file
  HORCM_MON, 117
creating pairs, 13

D
data assurance, 138
deleting pairs, 17
disabling (CLI), 99

E
enabling (CLI), 99

H
HORCM_MON, 117

I
installing (CLI), 96

L
LU, 30, 98, 99

P
pair status
  Simplex, 19
  Split, 19
pair status (CCI), 122
Pairs Failures, 16

R
restoration pairs, 14
restrictions configuration on the cascade of TrueCopy with SnapShot, 24

S
Simplex, 19
SnapShot
  components, 5
  disabling (CLI), 99
  enabling (CLI), 99
  installing (CLI), 96
  preparation, 96
  uninstalling (CLI), 98
Split, 19
status (CCI), 122

U
uninstalling (CLI), 98
updating V-VOL, 13
