
EMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS

Version 1.5

• Planning an SRDF/A and SRDF/A Multi-Session Consistency Replication Installation

• Implementation and Basic Operations of SRDF/A

• SRDF/A and SRDF/A Multi-Session Consistency Return Home Procedures

Mike Adams
Steve Haydon
Debbie McCarty
Tony Mungal
Mogens Pedersen



Copyright © 2007, 2008, 2009, 2010 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.

Part number H4118.5


Contents

Preface

Chapter 1  Background and Introduction
    Overview
    Essential terms and concepts
        Dependent-write I/O
        Dependent-write consistency
        Point of consistency
        Restore
    Disaster Recovery compared to Disaster Restart
        Disaster Recovery
        Disaster Restart
    Design considerations for Disaster Recovery and Disaster Restart
        Recovery Point Objective (RPO)
        Recovery Time Objective (RTO)
        Operational complexity
        Primary server activity
        Production impact
        Secondary server activity
        Number of copies of data
        Distance for solution
        Bandwidth requirements
        Federated consistency
        Testing the solution
        Cost
    Historical overview of consistency technology
        SRDF/S Consistency Groups
        TimeFinder consistent split
        SRDF/AR single hop
        SRDF/AR multi-hop
        SRDF/A and SRDF/A MSC

Chapter 2  EMC Foundation Products
    EMC Symmetrix VMAX Series with Enginuity
        Design overview
        Available models
        Architecture details
        Enginuity
        Summary
    Symmetrix DMX hardware and EMC Enginuity features
        EMC Enginuity operating environment
        Symmetrix DMX
    ResourcePak Base for z/OS
        Features
    SRDF family of products for z/OS
        SRDF Host Component for z/OS
        SRDF mainframe features
        Concurrent SRDF and SRDF/Star
        Cascaded SRDF
        Multi-Session Consistency
        SRDF/AR
        EMC Geographically Dispersed Disaster Restart (EMC GDDR)
        EMC Consistency Group for z/OS
        Restart in the event of a disaster or nondisaster
        SRDF/A Automated Recovery
        EMC AutoSwap
    TimeFinder family products for z/OS
        TimeFinder/Clone for z/OS
        TimeFinder/Snap for z/OS
        TimeFinder/Mirror for z/OS
        TimeFinder/CG

Chapter 3  Understanding SRDF/A and SRDF/A MSC Consistency
    Overview
        SRDF/A (Asynchronous) operations
    SRDF/A history
        Enginuity 5670/SRDF/A single session
        Enginuity 5670.50 and later/SRDF/A (MSC)
        Enginuity 5X71
        Enginuity 5772
        Enginuity 5772.79.71
        Enginuity 5773
        Enginuity 5874
        Enginuity 5874 Q4'09 Service Release
        Enginuity 5874.180
        Enginuity 5875
    Tolerance mode
    SRDF/A single session mode point in time
    SRDF/A single session mode states
        Not Ready/Target Not Ready state—system startup
        Inactive state
        Active state
    SRDF/A single session mode delta set switching
    SRDF/A single session mode state transitions
        Switching to SRDF/A mode
        Switching to SRDF/S mode from SRDF/A single session mode
        Coming out of the SRDF/A active state
    SRDF/A single session cleanup process
    SRDF/A single session mode recovery scenarios
        Temporary link loss
        Non-temporary link loss
    SRDF/A Reserve Capacity enhancement: Transmit Idle
        Transmit Idle overview
        Enginuity functionality
        Usage considerations
        Testing considerations
        Host Component interface to SRDF/A Transmit Idle
    SRDF/A Reserve Capacity enhancement: Delta Set Extension
        SRDF/A Delta Set Extension overview
        DSE theory of operation
        DSE interactions with other features
        Failback from secondary Symmetrix system devices
    SRDF/A Multi-Session Consistency (MSC) mode
    SRDF/A MSC mode dependent-write consistency
        Entering SRDF/A Multi-Session Consistency
        Performing an SRDF/A MSC consistent cycle switch
    SRDF/A MSC mode delta set switching
    SRDF/A MSC session cleanup process
    Using TimeFinder to create a restartable copy
        Creating local restartable copies — primary site
        Creating remote restartable copies — secondary site
    Establishing and using SRDF/A with Cascaded SRDF
        Overview and introduction
        Revised SRDF relationships for Cascaded SRDF
        Supported SRDF modes and general restrictions
        Limitations and restrictions
        Initial best practices for Cascaded SRDF
        Changes to the Host Component z/OS interface
        Cascaded SRDF/Star support for z/OS
    SRDF/A with SRDF/Extended Distance Protection
        Requirements and dependencies
        Current limitations and restrictions
    Mainframe Enabler 7.0 (SRDF Host Component 7.0) changes
    Using SRDF/A write pacing
        SRDF/A group pacing
        SRDF/A device pacing

Chapter 4  Planning an SRDF/A or SRDF/A MSC Replication Installation
    Introduction
    Functional comparison of common SRDF solutions
        SRDF/S (Synchronous) mode functionality review
        SRDF/AR (Automated Replication) functionality review
        Difference between synchronous and asynchronous
    Performance comparisons: SRDF/S and SRDF/A host applications
    Asynchronous replication: The major consideration
        Fundamental SRDF/A variables
    Peak time
        Activity level and duration of peak time
    Locality of reference/write folding
        Symmetrix cache locality of reference
        SRDF link locality of reference
    Link bandwidth
        Link bandwidth estimates
        Bandwidth burst exception
        Determining number of SRDF remote adapters
    Cache calculation
        Cycle time and size calculation
        Cache sizing example
    Balancing SRDF/A configurations
        Symmetrix cache management
        Unbalanced SRDF/A configurations
        Balanced SRDF/A configurations
        Options to resolve configuration balance issues
        Balanced configuration summary
    Network considerations
        Effect of distance on throughput
        Response time
        Planning for SRDF/A Delta Set Extension
        DSE paging performance considerations
        Sizing configuration additions required for DSE
        Planning the delta set save device configuration
        Estimating RPO impact
        DSE sizing example
        RPO while returning to normal RPO
        Additional DSE restrictions
    Analysis tools
        STP Navigator/WLA Performance Manager
    EMC SRDF/A planning and design service
        Overview
        Applicability
        Service positioning
        Project scope
        Scope exclusions

Chapter 5  Implementation of SRDF/A
    SRDF/A pre-implementation considerations
    SRDF/A additional considerations
        Determine the recovery system environment
        SRDF/A link bandwidth
        MSC high-availability support
        Gatekeepers
        SRDF/A configuration
    Software requirements and customization
        Technical requirements and limitations
        Symmetrix Control Facility (SCF) authorization codes
        Customization of the initialization parameters
        SCF DD DUMMY
        SUB=MSTR
        ResourcePak Base (SCF) and SRDF/A startup procedures
    SRDF/A configuration overview
        RDF groups and sharing of the RDF directors
        Creation of SRDF/A R1/R2 pairs
        R2 BCV initial device pairing—initial (full) synchronization
        Dynamic SRDF
        Creation of the RDFGRPS for the SRDF/A sessions
        Starting SRDF/A
        Activate Transmit Idle
    DSE pool definition
        Activate DSE
    Establishing a Cascaded SRDF configuration
        Process overview
        Cascaded replication example

Chapter 6  Basic SRDF/A Operations
    Resuming SRDF/A after normal termination or temporary link failure
        Recovery procedure from a PEND_DROP action
        Recovery process to reactivate SRDF/A after a PEND_DROP
        Split off the BCVs to save a gold copy
        Recovery process from a Link Failure (all the links fail)
    All links are lost
        Interrogate SCF to determine the status of MSC
        The links are now recovered
    Perform BCV split of the R2s and BCVs on the R2 side
        Set the SRDF/A volumes to ADCOPY-DISK mode
    Reestablish BCVs

Chapter 7  SRDF/A and SRDF/A MSC Return Home Procedures
    Restart in the event of disaster or abnormal termination (not a disaster)
        Disaster
        Abnormal termination (not a disaster)
    Activation of secondary R2 volumes for production processing
        Display the volumes
        Query the sessions to validate the drop
        Display the R2 volumes
        Take the SRDF links offline
        Enable secondary (R2) volumes to the operational host
        Make the secondary (R2) devices R/W
        Display the state of volumes
        Vary devices online
    Return Home overview
        Outline of recovery
        Pre-refresh SRDF pair
        Vary the primary (R1) volumes offline to the z/OS systems
        Display the current settings
        Set the SYNCH_DIRECTION at the primary host
        Set the SYNCH_DIRECTION at the secondary host
        Reestablish and split Gold Copy BCVs at the R2 Target site
        Refresh and RFR-RSUM SRDF pair
        Reestablish and split Gold Copy BCVs at the R2 Target site
        Stop SRDF and set SYNCH_DIRECTION to R1>R2
        Begin SRDF/A and MSC processing

Glossary

Index


Figures

1   Rolling disaster with multiple production Symmetrix systems
2   Rolling disaster with SRDF consistency group protection
3   TimeFinder consistent split process flow
4   SRDF/AR single hop replication process flow
5   SRDF/AR multi-hop replication process flow
6   SRDF asynchronous replication process flow
7   EMC Symmetrix VMAX Series with Enginuity
8   Symmetrix hardware and software layers
9   Symmetrix DMX logical diagram
10  z/OS SymmAPI architecture
11  SRDF family for z/OS
12  Classic SRDF/Star support configuration
13  SRDF consistency group using RDF-ECA
14  AutoSwap before and after states
15  TimeFinder family of products for z/OS
16  SRDF/A delta sets and their relationships
17  SRDF/A delta sets and their relationships
18  SRDF/A single session allowed state transitions
19  Capture delta set collects application writes
20  Transmit delta set empties
21  Transfer is halted prior to primary Symmetrix system cycle switch
22  Primary Symmetrix system delta set switch
23  New capture delta available for host writes
24  Secondary Symmetrix system waits for apply delta set to be restored
25  Secondary Symmetrix system delta set switch
26  Secondary Symmetrix system new receive delta set available for SRDF
27  Secondary Symmetrix system begins restore of apply delta set
28  Primary Symmetrix system begins SRDF transfer


29  SRDF/A single session transition path
30  SRDF/A delta set architecture
31  SQ SRDF/A display (primary side) showing Transmit Idle is ON
32  SQ SRDF/A display (primary side) showing Transmit Idle is ACTIVE
33  SRDF/A MSC delta sets and their relationships
34  SRDF/A MSC allowed state transitions
35  MSC capture delta set collects application writes
36  MSC primary Symmetrix system transmit delta set cycle is emptied
37  MSC primary Symmetrix system halts the SRDF transfer
38  MSC secondary apply delta set restore complete
39  MSC primary Symmetrix system cycle switch/writes are deferred
40  Writes are released/new capture delta set accepts host writes
41  MSC secondary Symmetrix system cycle switch
42  MSC secondary new receive delta set is available
43  MSC primary Symmetrix systems begin SRDF transfer
44  Secondary Symmetrix systems begin the apply delta set restore
45  Cascaded SRDF architecture
46  Query or control references for hop-2 devices are based on the workload location
47  Basic Cascaded SRDF configuration
48  Cascaded SRDF mode combination diagram
49  Cascaded SRDF/Star configuration under normal operation
50  SRDF/S on process flow
51  SRDF/AR single hop replication
52  SRDF/AR multi-hop replication
53  SRDF/A replication steps
54  SRDF/S replication steps
55  Inflow and outflow of writes are required to be equal on average
56  SRDF/AR alternative solution
57  Typical write workload and average workload
58  Peak time
59  Peak workload depends on collection interval
60  Physical disk locality of reference sample
61  Synchronous and asynchronous block transfer comparison
62  SRDF link locality of reference sample
63  Peak bandwidth depends on the collection interval
64  STP Navigator “Kbytes written per second” metric example
65  ET tool remote adapter analysis results
66  Elongated restore (N-2) cycle may impact other cycles
67  RPO as a function of time during SRDF/A throughput imbalance
68  Example Performance Manager/STP Navigator output
69  Symmetrix Dedicated Gatekeepers


70  Test configuration used
71  Create an SRDF group from workload site A to secondary site B for the first hop
72  Create SRDF group between workload sites B and C for second hop
73  Create device pairs between workload sites A and B for the first hop
74  Volumes to be paired between sites B and C for the second hop
75  Create device pairs between sites B and C specifying Adaptive Copy


Tables

1 Symmetrix VMAX specifications ............................................ 54
2 Symmetrix VMAX with Enginuity 5874 enhancements ...... 56
3 Mainframe Host Component Enginuity requirements ...... 130
4 Mainframe Host Component requirements ......................... 134
5 Valid Cascaded SRDF mode combinations .......................... 171
6 DMX-800 system write pending limit by system configuration ... 221
7 Returning to normal RPO following a 500-second remote link outage ... 237
8 Number of DSE slots that can be scheduled ........................ 242
9 Physical Director/Port Slot to MF SW SRDF Director ID numbers ... 270
10 Transmit Idle indicator values ............................................. 286

Preface

This EMC Engineering TechBook describes how to design, implement, operate, and support SRDF/A with Multi-Session Consistency within a Mainframe z/OS environment utilizing SRDF Host Component.

As part of an effort to improve and enhance the performance and capabilities of its product line, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this TechBook may not be supported by all revisions of the software or hardware currently in use. For the most up-to-date information on product features, refer to the release notes for the respective product or visit the EMC website.

Audience

This document is intended for storage management personnel, capacity planning personnel, database administrators, and disaster/restart personnel who are responsible for implementing SRDF/A with Multi-Session Consistency (MSC) or who may be considering SRDF/A as a viable long-distance replication solution.

Related documentation

The following is a list of related documents that may assist readers with more detailed information on topics described in this TechBook. Many of these documents may be found on the EMC Powerlink site (http://Powerlink.EMC.com).

◆ EMC Solutions Enabler

• EMC Solutions Enabler Release Notes (by release)

• EMC Solutions Enabler Support Matrix (by release)

• EMC Solutions Enabler Symmetrix Device Masking CLI Product Guide (by release)

• EMC Solutions Enabler Symmetrix Base Management CLI Product Guide (by release)

• EMC Solutions Enabler Symmetrix CLI Command Reference (by release)

• EMC Solutions Enabler Symmetrix Configuration Change CLI Product Guide (by release)

• EMC Solutions Enabler Symmetrix SRM CLI Product Guide (by release)

• EMC Solutions Enabler Symmetrix Double Checksum CLI Product Guide (by release)

• EMC Solutions Enabler Installation Guide (by release)

• EMC Solutions Enabler Symmetrix CLI Quick Reference (by release)

• EMC Solutions Enabler Symmetrix TimeFinder Family CLI Product Guide (by release)

• EMC Solutions Enabler Symmetrix SRDF Family CLI Product Guide (by release)

◆ EMC Symmetrix Remote Data Facility (SRDF) Product Guide

◆ EMC Replication Manager product documentation

Conventions used in this document

EMC uses the following conventions for special notices.

Note: A note presents information that is important, but not hazard-related.

A caution contains information essential to avoid data loss or damage to the system or equipment.

IMPORTANT

An important notice contains information essential to operation.

Typographical conventions

EMC uses the following type style conventions in this document:

Normal        Used in running (nonprocedural) text for:
              • Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
              • Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, functions, utilities
              • URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, notifications

Bold          Used in running (nonprocedural) text for:
              • Names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, man pages
              Used in procedures for:
              • Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
              • What the user specifically selects, clicks, presses, or types

Italic        Used in all text (including procedures) for:
              • Full titles of publications referenced in text
              • Emphasis (for example, a new term)
              • Variables

Courier       Used for:
              • System output, such as an error message or script
              • Specific user input (such as commands)
              • URLs, complete paths, filenames, prompts, and syntax when shown outside of running text

Courier italic
              Used in procedures for:
              • Variables on the command line
              • User input variables

< >           Angle brackets enclose parameter or variable values supplied by the user

[ ]           Square brackets enclose optional values

|             Vertical bar indicates alternate selections; the bar means "or"

{ }           Braces indicate content that you must specify (that is, x or y or z)

...           Ellipses indicate nonessential information omitted from the example

We'd like to hear from you!

Your feedback on our TechBooks is important to us! We want our books to be as helpful and relevant as possible, so please feel free to send us your comments, opinions and thoughts on this or any other TechBook:

[email protected]

1   Background and Introduction

This chapter presents these topics:

◆ Overview ......................................................................................................... 22
◆ Essential terms and concepts ........................................................................ 23
◆ Disaster Recovery compared to Disaster Restart ....................................... 25
◆ Design considerations for Disaster Recovery and Disaster Restart ........ 26
◆ Historical overview of consistency technology .......................................... 32

Overview

EMC® Symmetrix® Remote Data Facility (SRDF®) is an EMC business continuance solution that maintains a mirror image of data at the device level in Symmetrix systems located in physically separate sites. There are three modes of SRDF: synchronous, asynchronous, and adaptive copy. This document focuses on SRDF in asynchronous mode.

In SRDF/Asynchronous mode (SRDF/A), the Symmetrix system provides a dependent-write consistent point-in-time image on the secondary (target, R2) device, which is a short period of time behind the primary (source, R1) device. Managed in sessions, SRDF/A transfers data in cycles or delta sets to ensure that data at the secondary site is point-in-time dependent-write consistent. This mode requires an SRDF/A license.

The Symmetrix system acknowledges all writes to the primary devices in exactly the same way as other SRDF devices. Host writes accumulate on the primary side until certain conditions, defined later, are reached, and then those writes are transferred to the secondary devices in one delta set. Delta set operations for each cycle complete when that cycle’s data is committed by successfully destaging it to the secondary storage devices.

Because the writes are transferred in cycles, any duplicate remote writes written to a single delta set can be eliminated through a process known as Write Folding. Write Folding transfers only the last version of the changed track within any given single cycle.
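The effect of Write Folding can be sketched as follows. The cycle is modeled as a map keyed by track, so a rewrite of the same track within one delta set replaces the earlier image instead of adding to the transfer. The track numbers and payloads are illustrative, not SRDF internals:

```python
# Sketch of Write Folding: within one SRDF/A capture cycle, only the
# last image of each changed track is kept for transfer. Track IDs and
# payloads are illustrative; this is not the actual Enginuity structure.

def fold_writes(writes):
    """Collapse a sequence of (track, data) writes into one delta set."""
    delta_set = {}
    for track, data in writes:
        delta_set[track] = data      # a rewrite replaces the earlier image
    return delta_set

# Three host writes in one cycle, two of them to the same track:
writes = [(0x10, "v1"), (0x11, "a"), (0x10, "v2")]
delta = fold_writes(writes)

# Only two track images cross the SRDF links for this cycle, and
# track 0x10 carries its latest contents.
print(len(delta), delta[0x10])   # 2 v2
```

The bandwidth saving grows with the write locality of the workload: the more often the same tracks are rewritten within a cycle, the fewer images need to be transferred.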

SRDF/A provides a long-distance replication solution with minimal impact on performance while preserving data consistency with the host applications designed with dependent-write characteristics. This level of protection is intended for environments that always need a restartable copy of the data at a secondary site. Any partial delta sets of data are resolved during the cleanup process. This process preserves the dependent-write consistent point-in-time image on the secondary devices so that the secondary devices are, at most, two SRDF/A cycles behind the primary devices.

Essential terms and concepts

The terms described in this section are used throughout this document.

Dependent-write I/O

A dependent-write I/O is one that cannot be issued until a related predecessor I/O has completed. Most applications, and in particular database management systems (DBMS), have embedded dependent-write logic to ensure data integrity in the event of a failure in the host or server processor, software, storage system, or if an environmental power failure occurs affecting the whole complex.

Dependent-write consistency

Dependent-write consistency is a data state, where data integrity is guaranteed by dependent-write I/Os embedded in application logic. Database management systems are good examples of applications that utilize the dependent-write consistency strategy.

Database management systems must have protection schemes to guard against abnormal termination in order to successfully recover from one. The most common technique guarantees that a dependent write cannot be issued until its predecessor write has completed. Typically, the dependent write is a data or index write, while the predecessor write is a write to the log. Because the write to the log must be completed prior to issuing the dependent write, the application thread is synchronous to the log write (that is, it waits for that write to complete prior to continuing). The result of this kind of strategy is a dependent-write consistent database.
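The log-before-data pattern described above can be sketched as follows; the `write` helper stands in for a synchronous, durable I/O, and all names are illustrative rather than any DBMS interface:

```python
# Sketch of the dependent-write pattern: the data (or index) write is
# not issued until its predecessor log write has completed. A plain
# list stands in for stable storage; names are illustrative.

stable_storage = []

def write(record):
    """A synchronous write: returns only once the record is durable."""
    stable_storage.append(record)
    return True

def commit_page(log_record, data_page):
    # Predecessor write: the log entry describing the change.
    ok = write(("LOG", log_record))
    # Dependent write: issued only after the log write has completed.
    if ok:
        write(("DATA", data_page))

commit_page("update row 42", "page 7 image")

# The log record always lands before the page it describes, so a
# failure between the two writes leaves a recoverable state.
log_pos = stable_storage.index(("LOG", "update row 42"))
data_pos = stable_storage.index(("DATA", "page 7 image"))
print(log_pos < data_pos)   # True
```

Because the application waits on the log write, a failure at any instant leaves either both records, the log record alone, or neither; the "data without log" state never occurs on a single device.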

Point of consistency

A point of consistency is a point in time to which all current logical units of work have been completed and are in sync. It is a point to which data can be restored, recovered, or restarted in order to maintain integrity for a given set of data and applications.

Restore

Restore is the process that reinstates a prior copy of the data.

Rolling disaster

A rolling disaster is a series of events that lead up to a complete disaster. For example, the loss of a communication link occurs prior to a site failure. Most disasters are rolling disasters; their duration may range from milliseconds to hours.

Transactional consistency

Transactional consistency is a DBMS state caused by the completion or roll back of all in-flight transactions.

Consistency group

A consistency group is a user-defined group of devices that requires consistency protection. A group may reside within a single Symmetrix system or may span multiple Symmetrix systems. Consistency means that the devices within the group act in unison to preserve dependent-write consistency of a database that may be distributed across multiple Symmetrix systems or multiple SRDF sessions within a single Symmetrix system.

Disaster Recovery compared to Disaster Restart

Recent stringent regulations governing the recovery and retention of data have spurred many businesses to reconsider the major objectives associated with Disaster Recovery (DR); in fact, a major paradigm shift is underway towards Disaster Restart. Traditionally, DR is thought of as Disaster Recovery, but EMC provides Disaster Restart with its consistency technology. The following sections define recovery and restart. The biggest difference is that recovery is a manual creation of the point of consistency after the disaster, while restart creates the dependent-write consistent point of consistency prior to the disaster.

Disaster Recovery

Disaster Recovery is the process of restoring a previous copy of the data and applying logs or other necessary processes to that copy to bring it to a known point of consistency.

Disaster Restart

Disaster Restart is the process of restarting dependent-write consistent copies of data and applications, using the implicit application of DBMS recovery logs during DBMS initialization to bring the data and application to a transactional point of consistency.

If a database is shut down normally, the process of achieving a point of consistency during restart requires minimal work. However, if the database terminates abnormally, the restart process becomes elongated depending on the number and size of in-flight transactions that occurred at the time of abnormal termination.

An image of the database created using EMC consistency technology while it is in operation, without conditioning the database, is in a dependent-write consistent data state, which is similar to that created by a local power failure. This is also known as a DBMS restartable image. The restart of this image transforms it to a transactionally consistent data state by completing committed transactions and rolling back uncommitted transactions during the normal database initialization process.

Design considerations for Disaster Recovery and Disaster Restart

Loss of data, or loss of application availability, has a varying impact from one business to another. For instance, the loss of transactions for a bank could cost millions of dollars, whereas system downtime may not have a major fiscal impact. On the other hand, businesses that are primarily web-based may require 100 percent application availability in order to survive. The two factors, loss of data and loss of uptime, are the business drivers that are baseline requirements for a DR solution. When quantified, these two factors are more formally known as Recovery Point Objective (RPO) and Recovery Time Objective (RTO), respectively.

When evaluating a solution, the RPO and RTO requirements of the business need to be met. In addition, the solution needs to take into consideration operational complexity, cost, and the ability to return the whole business to a point of consistency. Each of these aspects is discussed in the following sections.

Recovery Point Objective (RPO)

The RPO is a point of consistency to which a user wants to recover or restart. It is measured in the amount of time from when the point of consistency was created or captured, to the time the disaster occurred. This time equates to the acceptable amount of data loss. Zero data loss (no loss of committed transactions from the time of the disaster) is the ideal goal, but the high cost of implementing such a solution must be weighed against the business impact and cost of a controlled data loss.

Some organizations, like banks, have zero data loss requirements. The database transactions entered at one location must be replicated immediately to another location. This can have a performance impact on the respective applications when the two locations are far apart. On the other hand, keeping the two locations close to one another may grossly minimize any performance impact, but might not protect against larger regional disasters like power outages, hurricanes, or earthquakes.

Defining the required RPO is usually a compromise between the needs of the business, the cost of the solution, and the probability of a particular event.

Recovery Time Objective (RTO)

RTO is the maximum amount of time allowed for recovery or restart to a specified point of consistency. This time involves many factors. For example, the time taken to:

◆ Provision power, utilities, and other physical environmental requirements

◆ Provision servers with the application and database software

◆ Configure the network

◆ Restore the data at the new site

◆ Roll the data forward to a known point of consistency

◆ Validate the data

Some delays can be reduced or eliminated by choosing certain DR options such as provisioning a hot site where host servers are preconfigured and always on standby. Also, if storage-based replication is used, the data at the hot site remains current, or close to current; therefore, the time taken to restore the data to a usable state is minimized.

As with RPO, each solution for RTO has a different cost profile. Defining the RTO is usually a compromise between the cost of the solution and the cost of database and application unavailability to the business.

Operational complexity

The operational complexity of a DR solution may be the most critical factor in determining the success or failure of a DR activity. The complexity of a DR solution can be considered as three separate phases:

1. Initial setup of the solution.

2. Maintenance and management of the implemented solution.

3. Execution of the DR plan in the event of a disaster.

While the first two phases above can be demanding on human resources, the third phase, execution of the DR plan, is where automation and simplicity must be the focus. In addition to the loss of servers, storage, networks, buildings, and so forth suffered in a disaster, key personnel may not be available, further complicating the situation. If the complexity of the DR solution is such that skilled personnel with an intimate knowledge of all systems involved are required to restore, recover, and validate application and database services, the solution has a high probability of failure.

Multiple database environments can evolve into complex federated database architectures. In these federated database environments, reducing DR complexity is critical. Validation of transactional consistency within these complex federated database environments is time consuming, costly, and requires application and database familiarity. One reason for this complexity is the heterogeneous databases and operating systems in these federated environments. Across multiple heterogeneous platforms, it is difficult to establish a common clock and therefore difficult to determine a business point of consistency across all participating platforms. This business point of consistency is created from an intimate knowledge of the transactions and data flow patterns.

Primary server activity

Some DR solutions may require additional processing activity on the primary servers that impacts both response time and throughput of the production applications. This additional processing activity should be understood and quantified for any given DR solution to ensure that the impact to the business is minimized. For some DR solutions the impact may be continuous; for others it may be sporadic, with bursts of write activity followed by periods of inactivity.

Production impact

Some DR solutions delay host activity while taking actions to propagate the changed data to another location. This action only affects write activity, and although the introduced delay may only be a few milliseconds, it can impact response time. This is especially true in a high-write environment. Synchronous solutions introduce delay into write transactions at the primary site; asynchronous solutions do not.

Secondary server activity

Some DR solutions require a target server at the remote location to perform DR operations. The target server incurs both software and hardware costs, requiring personnel to physically access it for basic operational functions like power on and power off. Ideally, the target server could be used for executing workloads such as development of test databases and applications. Some DR solutions require more target server processing cycles, while others require none.

Number of copies of data

DR solutions require replication of data in one form or another. Replication of a database and associated files can be as simple as making a tape backup and shipping the tapes to a DR site, or as sophisticated as asynchronous system-based replication. Some solutions may require multiple copies of the data to support DR functions. Additional copies of the data may also be required to perform testing of the DR solution, above those that support the DR process.

Distance for solution

Disasters, when they occur, have differing ranges of impact. For instance, a fire may destroy a building, an earthquake may demolish a city, or a tidal wave may devastate a region. The level of protection for a DR solution should address the probable disasters for a given location. For example, when protecting against an earthquake, the DR site should not be in the same city as the production site. For regional protection, the two sites need to be in two different regions. The separation distance required for the DR solution affects the type of DR solution that should be implemented.

Bandwidth requirements

One of the largest costs for DR is in provisioning bandwidth for the solution. Bandwidth costs are an operational expense; this makes solutions that have reduced bandwidth requirements very attractive. It is important to recognize in advance the bandwidth consumption of a given solution to be able to anticipate the running costs. Incorrect provisioning of bandwidth for DR solutions can have an adverse effect on production performance and can invalidate the overall solution.

Federated consistency

Databases are rarely isolated islands of information with no interaction or integration with other applications or databases. Frequently, databases are either loosely or tightly coupled to other databases using triggers, database links, and stored procedures. Some databases provide information downstream for other databases using information distribution middleware; other databases receive feeds and inbound data from message queues and similar transactions. The result can be a complex interwoven architecture with multiple inter-relationships. This is referred to as federated database architecture.

With federated database architectures, making a DR copy of a single database without regard to other components invites consistency issues and creates logical data integrity problems. All components in a federated architecture need to be recovered or restarted to the same dependent-write consistent point in time to avoid these problems.

With this in mind, it is possible that point database solutions for DR, like log-shipping, do not provide the required business point of consistency in a federated database architecture. Federated consistency solutions guarantee that all components, databases, applications, middleware, and so forth are recovered or restarted to the same dependent-write consistent point in time.

Testing the solution

Tested, proven, and documented procedures are also required for a DR solution. Too often the DR test procedures are operationally different from a true disaster set of procedures. It is crucial that the operational procedures are clearly documented. Customers should periodically execute the actual set of procedures for DR to become familiar with the procedures and their resulting outputs. This could be costly to the business because of the application downtime required to perform such a test, but is necessary to ensure validity of the DR solution.

Cost

The cost of doing DR can be justified by comparing it to the cost of not doing DR. What does it cost the business when the database and application systems are unavailable to users? For some companies this is easily measurable, and revenue loss can be calculated per hour of downtime or per hour of data loss.

Whatever the business, the DR cost is going to be an extra expense item and, in many cases, with little in return. The costs include, but are not limited to:

◆ Hardware (storage, servers, and maintenance)

◆ Software licenses and maintenance

◆ Facility leasing/purchase/maintenance

◆ Utilities

◆ Network infrastructure

◆ Personnel

Historical overview of consistency technology

Recognition by EMC of rolling disaster characteristics led to the development of the Consistency Group technology in 1998. Since then, EMC has developed many other consistency technologies and related solutions:

◆ Mainframe Enterprise SRDF Consistency Groups—Symmetrix, 1998

◆ Open Systems Enterprise SRDF Consistency Groups—Symmetrix, 1999

◆ Enterprise TimeFinder® Consistent Split—Symmetrix, 2000

◆ SRDF/AR—Symmetrix, 2001

◆ Enginuity™ Consistency Assist (ECA) for TimeFinder—Symmetrix, 2002

◆ SRDF/A—Symmetrix, 2003

◆ SRDF/A Multi-Session Consistency, 2003

◆ SRDF/Star, 2006

◆ SRDF/A Reserve Capacity Enhancements, 2007

SRDF/S Consistency Groups

Zero data loss disaster recovery techniques tend to use straightforward database and application restart procedures. These procedures work well when a disaster occurs if all processing and data mirroring stop at the same instant in time at the production site. Such is the case when there is a site power failure.

In most cases, it is unlikely that all data processing ceases at the same instant in time. Computing operations can be measured in nanoseconds and even if a disaster takes only a millisecond to complete, many such computing operations could be completed during the elapsed time of the disaster. This is known as a rolling disaster. The specific period of time that makes up a rolling disaster could be milliseconds (in the case of a power outage), or minutes in the case of a fire. In both cases the DR site must be protected against data inconsistency.

Rolling disaster

Protection against a rolling disaster is required when the data for a database resides on more than one Symmetrix system or multiple SRDF sessions. Figure 1 depicts a dependent-write I/O sequence where a predecessor log write is happening prior to a page flush from a database buffer pool. The log device and data device are on different Symmetrix systems with different replication paths. Figure 1 demonstrates how rolling disasters can affect this dependent-write sequence.

Figure 1 Rolling disaster with multiple production Symmetrix systems

1. This example of a rolling disaster starts with a loss of the synchronous links between the primary Symmetrix system and the secondary Symmetrix system. This prevents the remote replication of data on the primary Symmetrix system.

2. The primary Symmetrix system, which is now no longer replicating, receives a predecessor log write of a dependent-write I/O sequence. The local I/O is completed, however, it is not replicated to the secondary Symmetrix system, and the tracks are marked as being ‘owed’ to the secondary Symmetrix system. There is nothing to prevent the predecessor log write from completing at the production host.

(Figure 1 legend: X = DBMS Data, Y = Application Data, Z = Logs. Steps: 1. Rolling disaster begins; 2. Log write; 3. Dependent data write; 4. Inconsistent data, "data ahead of log".)

3. Now that the predecessor log write has completed, the dependent data write is issued. This write is received on both the upper primary Symmetrix and the secondary Symmetrix system because the rolling disaster has not yet affected those communication links.

4. If the rolling disaster ends in a complete disaster, the data at the restart site is left in a “data ahead of log” condition, which is an inconsistent state for a database. The severity of the situation is that when the database is restarted and implicit recovery is undertaken, that recovery process may not detect these inconsistencies. Someone who is familiar with the transactions running at the time of the rolling disaster may be able to detect them, or database utilities could be run to detect some of them.

A rolling disaster can happen in such a manner that data links providing remote mirroring support are disabled in a staggered fashion, while application and database processing continues at the production site. The sustained replication during the time when some primary Symmetrix systems are still communicating with their secondaries through their respective links, while other primary Symmetrix systems are not (due to link failures), can cause data integrity exposures at the recovery site. Some data integrity problems caused by the rolling disaster cannot be resolved through normal database restart processing and may require a full database recovery using appropriate backups, journals, and logs. A full database recovery elongates overall application restart time at the recovery site (higher RTO).

Protection against a rolling disaster

SRDF Consistency Group (SRDF/CG) technology provides protection against rolling disasters. A consistency group is a set of Symmetrix system volumes spanning multiple SRDF sessions or multiple Symmetrix systems or both, which replicate as a logical group to other Symmetrix systems using Synchronous SRDF (SRDF/S). It is not a requirement to span multiple SRDF sessions or Symmetrix systems or both when using consistency sessions. Consistency group technology guarantees that if a single primary volume is unable to replicate to its respective secondary volume for any reason, then all the volumes in the group stop replicating. This ensures that the image of the data on the secondary Symmetrix system is consistent from a point-in-time dependent-write perspective.

Figure 2 depicts a dependent-write I/O sequence where a predecessor log write occurs prior to a page flush from a database buffer pool. The log device and data device are on different Symmetrix systems with different replication paths. Figure 2 demonstrates how the data inconsistency caused by a rolling disaster can be prevented using SRDF Consistency Group technology.

Figure 2 Rolling disaster with SRDF consistency group protection

1. Consistency group protection is defined containing volumes X, Y, and Z on the source Symmetrix systems. This consistency group definition must contain all of the devices that need to maintain dependent-write consistency, and it must reside on all participating hosts that issue I/O to these devices. A mix of CKD (mainframe) and FBA (UNIX/Windows) devices can be logically grouped together. In some cases, the entire processing environment may be defined as a consistency group to ensure point-in-time dependent-write consistency.

[Figure 2 legend: X = DBMS data, Y = application data, Z = logs. Steps shown: 1. ConGroup protection; 2. rolling disaster begins; 3. log write; 4. ConGroup "trip"; 5. suspend R1/R2; 6. dependent data write; 7. dependent-write consistent copy at the remote site.]


2. The rolling disaster described above begins, preventing the replication of changes from volume Z to the remote site.

3. The predecessor log write occurs to volume Z, causing a consistency group (ConGroup) trip.

4. A ConGroup trip holds the I/O that could not be replicated along with all of the I/O to the logically grouped devices. The I/O is held by either Input-Output Supervisor (IOS) on the z/OS host, or by PowerPath® on the UNIX or Windows host, or by Symmetrix Enginuity Consistency Assist (SRDF-ECA). It is held long enough to issue two I/Os per affected Symmetrix system. The first I/O puts the devices into a pending state.

5. The second I/O performs the suspend operation of the R1/R2 relationship for the logically grouped devices that immediately disables all replication to the remote site. This allows other devices outside the affected group to continue replicating, provided the communication links are available.

6. After the R1/R2 relationship is suspended, all deferred write I/Os are released, allowing the predecessor log write to complete to the host. The DBMS then issues the dependent data write, which arrives at the primary Symmetrix system but is not replicated to the secondary Symmetrix system.

7. If a complete primary site failure occurred from this rolling disaster, dependent-write consistency at the remote site is preserved. If a complete primary site disaster has not occurred and the failed links are activated again, consistency group replication can be resumed. Once the SRDF process reaches synchronization, the point-in-time dependent-write consistent copy is achieved at the remote site. Creating a copy of the dependent-write consistent image while the resume takes place is strongly recommended.
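The trip sequence above can be sketched in a few lines of code. This is an illustrative model only, with hypothetical class and method names; it is not an EMC API. The point it demonstrates is that once any volume in the group cannot replicate, every write to the group is held until all R1/R2 pairs are suspended, after which writes complete locally only, so nothing inconsistent ever reaches the R2 image.

```python
# Hypothetical sketch of a consistency-group "trip" (names are illustrative,
# not an actual EMC interface).

class ConsistencyGroup:
    def __init__(self, volumes):
        self.volumes = volumes                        # e.g. ["X", "Y", "Z"]
        self.replicating = {v: True for v in volumes}
        self.held_writes = []
        self.log = []

    def write(self, volume, data):
        if not self.replicating[volume]:              # replication failure seen
            self.trip(first_failed=volume, pending=(volume, data))
        else:
            self.log.append(("replicated", volume, data))

    def trip(self, first_failed, pending):
        self.log.append(("trip", first_failed))
        # Step 4: hold the failed I/O along with all I/O to the grouped devices.
        self.held_writes.append(pending)
        # Step 5: suspend the R1/R2 relationship for the entire group.
        for v in self.volumes:
            self.replicating[v] = False
            self.log.append(("suspended", v))
        # Step 6: release deferred writes; they complete locally only.
        for vol, data in self.held_writes:
            self.log.append(("local_only", vol, data))
        self.held_writes.clear()


group = ConsistencyGroup(["X", "Y", "Z"])
group.replicating["Z"] = False       # rolling disaster: Z's link is down
group.write("Z", "log record")       # predecessor log write -> group trips
group.write("X", "data page")        # dependent write: local only, never
                                     # replicated, so the R2 image stays
                                     # dependent-write consistent
```

Because the log write never reaches the remote site, the dependent data write must not either; the group-wide suspend is what enforces that ordering.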

TimeFinder consistent split

TimeFinder consistent split allows a dependent-write consistent copy of data to be created within the local Symmetrix system, or multiple local Symmetrix systems. TimeFinder uses a feature known as Enginuity Consistency Assist (ECA) to defer the writes while an instant split is issued. The result is a point-in-time dependent-write consistent image on a set of Business Continuance Volumes (BCVs), clones, or snap devices. This is done without database utilities holding the database I/O or shutting down the databases. This image is considered a restartable copy.

Figure 3 lists the steps of a consistent split process flow.

Figure 3 TimeFinder consistent split process flow

1. This example starts with a dependent-write transaction. The dependent-write I/O sequence starts with a predecessor log write followed by a dependent data write. TimeFinder consistent split preserves the dependent write I/O sequence.

2. When the consistent split occurs, the writes are deferred while the instant split is issued. Once the instant split completes, the writes are allowed to flow to the devices. Holding the writes makes the consistent split appear as a single "atomic" operation.

3. A predecessor log write issued during the consistent split is deferred and allowed to complete after the split occurs. To the application or database, the predecessor log write simply looks like a slightly elongated I/O.

4. The result of the above process flow is the ability to guarantee the dependent-write I/O principle without using database technologies.
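The defer-split-resume sequence can be sketched as follows. This is a minimal illustrative model, not how ECA is implemented inside the array; the `MirroredPair` class and its methods are hypothetical. It shows that a write arriving during the split window lands on the standard device only, after the point-in-time image is taken.

```python
# Minimal sketch of a consistent split (illustrative only; real consistent
# splits are performed by Enginuity Consistency Assist inside the array).

class MirroredPair:
    def __init__(self):
        self.std = {}           # standard device contents
        self.bcv = {}           # BCV contents (synchronized before the split)
        self.deferred = []
        self.splitting = False

    def write(self, track, data):
        if self.splitting:
            self.deferred.append((track, data))   # held: the "elongated I/O"
        else:
            self.std[track] = data
            self.bcv[track] = data                # mirrored while established

    def consistent_split(self):
        # 2a: defer write I/Os (writes queued above while splitting is True)
        self.splitting = True
        # 2b: instant split -- the BCV keeps a point-in-time image of the STD
        self.bcv = dict(self.std)
        # 2c: resume write I/Os; deferred writes complete against the STD only
        self.splitting = False
        for track, data in self.deferred:
            self.std[track] = data
        self.deferred.clear()


pair = MirroredPair()
pair.write("log", "rec1")        # before the split: lands on STD and BCV
pair.splitting = True            # ECA window opens
pair.write("data", "page1")      # host write during the window: deferred
pair.consistent_split()          # atomic PIT image, then release writes
```

The BCV image ends up holding only writes that completed before the window opened, which is exactly the dependent-write consistent, restartable copy described above.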



SRDF/AR single hop

SRDF Automated Replication, or SRDF/AR, is a continuous movement of dependent-write consistent data to a remote site using SRDF adaptive copy mode and TimeFinder consistent split technology.

TimeFinder BCVs are used to create a point-in-time dependent-write consistent image of the data to be replicated. Because the BCVs also have an R1 personality, SRDF can be used in adaptive copy mode to replicate the data from the BCVs to the target site. Since the BCVs are not being updated by the host system, replication completes in a predictable length of time. The elapsed time for the BCV remote replication process depends on factors such as the bandwidth of the network "pipe" between the two locations, the distance between the two locations, the quantity of changed data tracks being replicated, and the locality of reference of the changed tracks.

On the remote Symmetrix system, another BCV copy of the data is made using data on the secondary (R2) devices. The BCV copy of the data in the remote Symmetrix system is commonly called the “gold” copy of the data. This “gold” copy is necessary because the next SRDF/AR iteration replaces the R2 image in a non-ordered fashion. If a disaster occurred while the R2s were being synchronized, there would not be a valid copy of the data at the restart site. The “gold” copy addresses that scenario. The whole process then repeats as long as SRDF/AR is in operation.

With SRDF/AR, there is no host impact. Writes are acknowledged immediately when they are received in the cache of the primary Symmetrix system. Figure 4 on page 39 depicts the process flow.


Figure 4 SRDF/AR single hop replication process flow

1. Writes are received into the primary Symmetrix system cache and are acknowledged as “completed” to the host system. The BCV/R1s have been synchronized with the standard (STD) devices at this point. A consistent split command is executed against the STD-BCV pairing to create a point-in-time (PIT) image of the data on the BCVs.

2. SRDF transmits the data on the primary BCV/R1s to the secondary R2s in the remote Symmetrix system.

3. When the primary BCV/R1s are synchronized with the secondary R2s, they are re-established with the standard devices in the primary Symmetrix system. This causes the device relationships between the primary and secondary devices to become suspended on the SRDF link. At the same time, an incremental establish operation is performed on the secondary Symmetrix system to create a “gold” copy on the BCVs in that Symmetrix system.

4. When the BCVs in the secondary Symmetrix system are fully synchronized with the R2s, they are split, and the configuration is ready to begin another cycle.



5. The cycle repeats (steps 1–4) based on configuration parameters that can specify the cycles to begin at specific times, specific intervals, or to run continuously whenever the previous cycle completes.

It should be noted that cycle times for SRDF/AR are usually in the minutes-to-hours range. The RPO is double the cycle time in a worst-case scenario. This is suitable for applications with relaxed RPOs.
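The worst-case RPO arithmetic is simple but worth making explicit: a disaster can destroy both the cycle currently being captured and the previously captured cycle still in flight, so up to two full cycles of updates can be lost. A one-line illustration (hypothetical helper name):

```python
# Worst-case RPO for SRDF/AR-style cyclic replication: a disaster just before
# the in-flight cycle lands at the target loses that cycle plus the cycle
# being captured, i.e. up to two cycle times of updates.

def worst_case_rpo_minutes(cycle_minutes):
    return 2 * cycle_minutes

# 30-minute cycles expose up to 60 minutes of updates to loss.
exposure = worst_case_rpo_minutes(30)
```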

The added benefit of a longer cycle time is that the data locality of reference will likely increase, since there is a much greater chance of a particular data location being updated more than once in a one-hour interval than in a one-minute interval. This increase in data locality of reference translates into bandwidth savings through the Write Folding technology developed by EMC.
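The effect of write folding can be sketched in a few lines. This is an illustrative model of the idea, not EMC's implementation: within one replication cycle only the final image of each rewritten track needs to cross the link, so repeated updates to the same track fold into a single transfer.

```python
# Sketch of write folding within one replication cycle (illustrative only).

def fold_writes(writes):
    """writes: ordered sequence of (track, data) host writes in one cycle.
    Returns the tracks actually shipped: the last image of each track."""
    folded = {}
    for track, data in writes:
        folded[track] = data          # a rewrite replaces the earlier image
    return folded


# Four host writes in the cycle, but track t1 is rewritten twice...
cycle = [("t1", "v1"), ("t2", "v1"), ("t1", "v2"), ("t1", "v3")]
shipped = fold_writes(cycle)
# ...so only two track images cross the link: t1 -> "v3" and t2 -> "v1".
```

The longer the cycle, the more rewrites of hot tracks fold together, which is why longer cycle times reduce bandwidth requirements.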

Instantiation of the database, a baseline full copy of all the volumes that will participate in the SRDF/AR replication, must be completed before SRDF/AR can be started. This requires the following: a full TimeFinder establish to the BCVs in the primary system, a full SRDF establish of the primary devices (BCV/R1s) to the secondary devices (R2s), and a full TimeFinder establish of the secondary devices (R2s) to the BCVs in the target system. There is an option to automate the initial setup.

As with other SRDF solutions, SRDF/AR does not require a host at the DR site. The commands to update the secondary devices (R2s) and manage the TimeFinder synchronization of the BCVs at the recovery site are all managed in-band from the production site.

How to restart in the event of a disaster

In the event of a disaster, it is necessary to determine whether the most current copy of the data is located on the recovery site's BCVs or on the recovery site's secondary devices (R2s), since this depends on where in the replication cycle the disaster occurs.

SRDF/AR multi-hop

SRDF Automated Replication multi-hop, or SRDF/AR multi-hop, is an architecture that provides long distance replication with zero data loss through use of a bunker Symmetrix system. Production data is replicated synchronously to the bunker Symmetrix system, which is within 200 km of the production Symmetrix system.


This allows synchronous replication, but is far enough away that disasters at the primary site may not affect it. Typically, the bunker Symmetrix system is placed in a hardened computing facility.

BCVs in the bunker Symmetrix system use consistent split technology periodically to synchronize the tertiary Symmetrix system devices. This provides a dependent-write consistent point-in-time image of the data in the secondary system. These bunker site BCVs also have a primary SRDF (R1) personality, which means that SRDF in adaptive copy mode can be used to replicate the data from the bunker Symmetrix system to the recovery site. Since the data on the BCV primary devices is not changing because they are split from the secondary devices, the replication can be completed in a predictable length of time. The replication time depends on the network bandwidth available between the bunker location and the DR location, the distance between the two locations, the quantity of changed data, and the locality of reference of the changed data. On the tertiary Symmetrix system, another BCV copy of the data is made using the secondary devices (R2s). This BCV copy of the data is commonly called the “gold” copy of the data. The process constantly repeats.

With SRDF/AR multi-hop, there is minimal host impact at the production site because writes are acknowledged (to the production site) as completed when they are received in the cache of the bunker Symmetrix system. Figure 5 on page 42 depicts the process flow.


Figure 5 SRDF/AR multi-hop replication process flow

1. BCVs are synchronized and consistently split against the secondary devices (R2s) in the bunker Symmetrix system. The write activity is momentarily suspended on the source Symmetrix system prior to this process in order to obtain a dependent-write consistent point-in-time image on the secondary devices (R2s) in the bunker Symmetrix system. This creates a dependent-write consistent point-in-time copy of the data on the BCVs attached to those R2s. These BCVs are also primary (R1) devices replicating to the secondary (R2) devices at the DR site.

2. SRDF transmits the data on the bunker primary devices (BCV/R1s) to the secondary devices (R2s) in the DR Symmetrix system.

3. When the primary (BCV/R1) devices are synchronized with the secondary (R2) devices in the target Symmetrix system, the bunker primary devices (BCV/R1s) are established again with the secondary devices (R2s) in the bunker Symmetrix system. This causes SRDF to be suspended between the bunker Symmetrix system and the DR Symmetrix system. At the same time, an incremental establish operation is performed on the DR Symmetrix system to create a "gold" copy on the BCVs in that system.

4. When the BCVs in the DR Symmetrix system are fully synchronized with the secondary devices (R2s), they are split and the configuration is ready to begin another cycle.

5. The cycle repeats based on configuration parameters that specify whether the cycles are to begin at specific times, specific intervals, or to run immediately after the previous cycle completes.

It should be noted that even though cycle times for SRDF/AR multi-hop are usually in the minutes-to-hours range, the most current data is always in the bunker Symmetrix system. Unless there is a regional disaster that destroys both the primary site and the bunker site, the bunker Symmetrix system transmits all data to the remote DR site. This means zero data loss from the point at which the rolling disaster begins, or an RPO of zero seconds. This is the preferred solution for applications that require zero data loss and long distance DR.

An added benefit of a longer cycle time is that the data locality of reference will likely increase, since there is a much greater chance of a track being updated more than once in a one-hour interval than in a one-minute interval. The increase in locality of reference results in reduced bandwidth requirements for the network segment between the bunker Symmetrix system and the DR Symmetrix system. Longer cycle times, however, may elongate the RPO.

Before SRDF/AR can be initiated, initial instantiation of the database must be run as follows:

1. Perform an establish of the primary devices (R1s) to the secondary devices (R2s) in the bunker Symmetrix system.

2. Once the establish has completed, the primary and secondary devices need to be kept in synchronous mode.

3. Perform a full establish from the secondary devices (R2s) to the BCVs in the bunker Symmetrix system.

4. Perform a full SRDF establish of the primary devices (BCV/R1s) to the secondary devices (R2s) in the DR Symmetrix system.

5. Perform a full establish of the secondary devices (R2s) to the BCVs in the DR Symmetrix system.

There is an option to automate this process of instantiation.


SRDF/A and SRDF/A MSC

SRDF/A, or asynchronous SRDF, is a method of replicating production data changes from one Symmetrix system to another using delta set technology. Delta sets are collections of changed blocks grouped together by time interval. The default time interval is 30 seconds but may vary depending upon host system write workload, available network bandwidth, and the particular Symmetrix system configuration at each site. The delta sets are transmitted from the primary site to the secondary site in the order they were created. SRDF/A preserves the point-in-time dependent-write consistency of the database at all times at the recovery site.

The distance between the source and target Symmetrix systems is unlimited, and there is minimal host impact. Writes are acknowledged immediately when they are received in the cache of the primary Symmetrix system. SRDF/A is only available on the Symmetrix DMX™ family of Symmetrix systems. Figure 6 illustrates the process flow.

Figure 6 SRDF asynchronous replication process flow



1. Writes are received into the source Symmetrix system cache. The host receives immediate acknowledgement that the write is complete. Writes are gathered into the capture (N) delta set initially for 30 seconds.

2. The active cycle on the primary Symmetrix system contains the current host writes or N data version in the capture delta set. A delta set switch occurs and the current capture delta set becomes the transmit delta set. A new empty capture delta set is created.

3. The inactive cycle contains the N-1 data version that is transferred using SRDF/A from the primary Symmetrix system to the secondary Symmetrix system. The primary inactive cycle is the transmit delta set and the secondary Symmetrix system inactive cycle is the receive delta set.

4. The apply delta set marks all the changes in the delta set against the appropriate volumes as invalid tracks and begins destaging the blocks to disk.

5. The cycle repeats continuously.

Dependent-write consistency is ensured within SRDF/A by the host adapter obtaining the active cycle number from a single location in global memory. The active cycle number is assigned to each I/O at the beginning of the I/O, and the I/O retains that cycle number even if a cycle switch occurs during its lifetime.

This results in the cycle switch process being atomic for dependent-write sequences, even though it is not physically an atomic event across a range of volumes. As a result, two I/Os with a dependent relationship between them can either be in the same cycle, or in subsequent cycles.
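The cycle-number rule can be sketched directly. This is an illustrative model with hypothetical names, not Enginuity internals: each I/O reads the active cycle number once, at the start of the I/O, and a dependent write, which by definition is issued only after its predecessor completes, can therefore never be tagged with an earlier cycle than its predecessor.

```python
# Sketch of SRDF/A dependent-write ordering via cycle numbers (illustrative).

class DeltaSetSession:
    def __init__(self):
        self.active_cycle = 1     # single location "in global memory"

    def start_io(self):
        # The tag is assigned once, at the start of the I/O, and kept even
        # if a cycle switch happens while the I/O is in flight.
        return self.active_cycle

    def cycle_switch(self):
        self.active_cycle += 1    # capture delta set becomes transmit set


session = DeltaSetSession()
log_cycle = session.start_io()    # predecessor log write tagged with cycle 1
session.cycle_switch()            # switch occurs during the log write's life
data_cycle = session.start_io()   # dependent data write, issued only after
                                  # the log write completed, lands in cycle 2
```

Because the dependent write's cycle number is always greater than or equal to its predecessor's, the two I/Os end up in the same cycle or in subsequent cycles, never reversed.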

Enginuity 5670.50 added SRDF/A support for control of multiple Symmetrix systems, provided there is a single SRDF group per Symmetrix system. Beginning with Enginuity 5x71 for mainframe and open systems, SRDF/A is supported in configurations where any combination of the following exists:

◆ Multiple primary Symmetrix systems

◆ Multiple primary Symmetrix SRDF sessions connected to multiple secondary Symmetrix systems

◆ Multiple secondary Symmetrix system SRDF sessions


SRDF/A MSC configurations can also support mixed open systems and mainframe data controlled within the same SRDF/A MSC session.

Achieving data consistency across multiple SRDF/A sessions requires that the cycle switch process described earlier in this chapter be coordinated among the participating Symmetrix system SRDF/A sessions, multiple Symmetrix systems, or both. It is also required that the cycle switch occurs during a very brief time period when no host system writes are being serviced by any of the Symmetrix systems. Achieving this requires a single coordination point to drive the cycle switch process in all participating Symmetrix systems; this function is provided by the SRDF control software running on the host.

From a single Symmetrix system perspective, I/O is processed exactly the same way in SRDF/A MSC mode as in single session mode described previously.
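The coordinated cycle switch can be sketched as host-side pseudologic. This is a hypothetical illustration, not the actual ResourcePak implementation: the single coordination point briefly holds writes on every participating session, switches all cycles, then releases writes, so every array's delta-set boundary falls at the same dependent-write consistent instant.

```python
# Sketch of MSC-style coordinated cycle switching across multiple arrays
# (hypothetical host-side logic, illustrative only).

class Session:
    def __init__(self, name):
        self.name = name
        self.cycle = 1
        self.writes_held = False

    def hold_writes(self):
        self.writes_held = True

    def switch_cycle(self):
        # A switch is only valid inside the brief window in which no host
        # writes are being serviced on ANY participating Symmetrix system.
        assert self.writes_held
        self.cycle += 1

    def release_writes(self):
        self.writes_held = False


def msc_cycle_switch(sessions):
    for s in sessions:            # 1: open the quiet window everywhere
        s.hold_writes()
    for s in sessions:            # 2: coordinated cycle switch on all boxes
        s.switch_cycle()
    for s in sessions:            # 3: close the window, resume host writes
        s.release_writes()


group = [Session("symm-A"), Session("symm-B"), Session("symm-C")]
msc_cycle_switch(group)           # all delta-set boundaries now aligned
```

The essential design point is the single coordinator: if each array switched on its own timer, a dependent write could land in an older cycle on one box than its predecessor did on another.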

SRDF/A with a single source Symmetrix system

Before the asynchronous mode of SRDF can be established, initial instantiation of the database must be completed. In other words, a baseline full copy of all the volumes that will participate in the asynchronous replication must first be obtained. This is usually accomplished using the adaptive copy mode of SRDF.

SRDF/A with multiple source Symmetrix systems

When a database spans multiple Symmetrix systems and SRDF/A is used for long distance replication, separate software must be used to manage the coordination of the delta set boundaries between the participating Symmetrix systems, and to stop replication if any of the volumes in the group cannot replicate for any reason. The software must ensure that all delta set boundaries on every participating Symmetrix system in the configuration are coordinated to give a dependent-write consistent point-in-time image of the database.

SRDF Multi-Session Consistency (MSC) is an EMC technology that provides consistency across multiple SRDF/A sessions within the same Symmetrix system, across multiple Symmetrix systems, or both. MSC is available with Enginuity 5671 microcode and above, together with ResourcePak® Base V5.4 and above. SRDF/A with MSC is supported by a started task that performs cycle-switching and cache recovery operations across all SRDF/A sessions in the MSC group. This ensures that a dependent-write consistent secondary (R2) copy of the database exists at the remote site at all times. The software monitors and manages the MSC consistency group. It is recommended to have two LPARs running the started task for redundancy; these hosts need visibility and gatekeepers to all Symmetrix systems involved in the MSC consistency group definition. At the time of an interruption (an SRDF link failure, for instance), MSC analyzes the status of all SRDF/A sessions and either commits the last cycle of data to the secondary (R2) devices, or discards it.

Note: There is no requirement for a host at the remote site during the asynchronous replication. The secondary Symmetrix system itself manages the inbound writes, and updates the appropriate devices in the system.

How to restart in the event of a disaster

In the event of a disaster when the primary source Symmetrix system is lost, database and application services must be run from the DR site. A host at the DR site is required for restart. However, before restart can be attempted, the secondary (R2) devices must be write-enabled to the host.

Once the data is available to the host, the database can be restarted. Transactions that were committed but not completed are rolled forward and completed using the information in the active logs. Transactions that have updates applied to the database but not committed are rolled back. The result is a transactionally consistent database.

SRDF/A Reserve Capacity enhancements: Introduction to Transmit Idle and Delta Set Extension

SRDF/A Reserve Capacity enhances the ability of SRDF/A to maintain an operational state when encountering network resource constraints that would have previously suspended SRDF/A operations. With SRDF/A Reserve Capacity functions enabled, additional resource allocation can be applied to address temporary workload peaks, periods of network congestion, or even transient network outages.

SRDF/A Transmit Idle is a Reserve Capacity enhancement, available at Enginuity level 5x71 and higher, that provides SRDF/A with the capability of dynamically and transparently extending the capture, transmit, and receive phases of the SRDF/A cycle while masking the effects of an "all SRDF links lost" event. Without the SRDF/A Transmit Idle enhancement, an "all SRDF links lost" event would typically result in the abnormal termination of SRDF/A. The SRDF/A Transmit Idle enhancement has been specifically designed to prevent this event from occurring.

Beginning with Enginuity level 5772, there is an additional option for managing the buffering of delta set data: SRDF/A Delta Set Extension (DSE). DSE augments the cache-based delta set buffering of SRDF/A with a disk-based buffering capability. This extended buffering may allow SRDF/A to ride through larger and more prolonged SRDF/A throughput imbalances than would ordinarily be possible with cache-based delta set buffering alone.

Customers can configure DSE for any SRDF/A session and within any configuration in which SRDF/A is a participant, including SRDF/Star and Concurrent SRDF. DSE is designed to preserve the major benefits of SRDF/A, including a host-write response time impact that is typically not measurable, the use of write folding to reduce remote link bandwidth requirements, and the options SRDF/A provides for managing consistency.

Note: SRDF/A Reserve Capacity enhancements will not fix a fundamentally unbalanced configuration.

Transmit Idle and DSE work together to maximize availability of continuous remote replication operations while minimizing operational overhead.

SRDF enhancements in Enginuity 5773

Enginuity level 5773 is the latest Enginuity release supporting the Symmetrix Direct Matrix Architecture® DMX-3 and DMX-4 storage arrays. It contains new features that provide increased storage utilization and optimization, enhanced replication capabilities, and greater interoperability and security, as well as multiple ease-of-use improvements.

The following enhancements affect the usability and security of SRDF:

◆ Ability to move dynamic SRDF devices between SRDF groups without requiring a full resynchronization

◆ Timestamp representation of SRDF/A R2 data and configuration of Concurrent SRDF in static SRDF environments

◆ IPv6 and IPSec support for SRDF over Gigabit Ethernet


The following features were either added or significantly enhanced with the release of Enginuity 5773:

◆ Cascaded SRDF — New three-site disaster recovery configuration where data from a primary site is synchronously replicated to a secondary site, and then asynchronously replicated to a tertiary site.

◆ Moving dynamic RDF device pairs — Allows moving a dynamic SRDF device pair from one SRDF group to another without the deletion of the existing dynamic RDF pair; thus avoiding a full resynchronization.

◆ SRDF/A R2 timestamp representation — SRDF/A “time that R2 is behind R1” information is now available when querying the SRDF/A devices from a host connected to the R2 Symmetrix DMX.

◆ Configure static Concurrent RDF — Solutions Enabler 6.5 has introduced new syntax for the symconfigure command to allow Concurrent SRDF pairs to be managed for static SRDF.

◆ IPv6 support — Enginuity level 5773 introduces support for IPv6 on Symmetrix DMX-3 and DMX-4 storage arrays for SRDF on Gigabit Ethernet directors.

◆ IPSec support — Enginuity level 5773 offers IPSec support for SRDF on the Symmetrix DMX-3 and DMX-4 Gigabit Ethernet directors.


Chapter 2: EMC Foundation Products

This chapter presents these topics:

◆ EMC Symmetrix VMAX Series with Enginuity
◆ Symmetrix DMX hardware and EMC Enginuity features
◆ ResourcePak Base for z/OS
◆ SRDF family of products for z/OS
◆ TimeFinder family products for z/OS


EMC Symmetrix VMAX Series with Enginuity

The EMC Symmetrix VMAX Series with Enginuity is the newest addition to the Symmetrix family, and the first high-end system purpose-built for the virtual data center. Based on the Virtual Matrix Architecture, Symmetrix VMAX scales performance and capacity to unprecedented levels, delivers nondisruptive operations, and greatly simplifies and automates the management and protection of information. Advanced tiering via Enterprise Flash, Fibre Channel, and SATA drives allows users to ensure that the right data is on the right storage tier at the right cost.

At the heart of the Symmetrix VMAX system is the Virtual Matrix Architecture, designed to break through the physical boundaries of fixed backplane storage architectures - in a system that can scale to dozens of PBs, support thousands of virtual servers, deliver millions of IOPs, and provide 24x7xforever availability.

The advantages of this unique scale-out architecture, along with new Enginuity operating environment capabilities are critical for customers transitioning to more of a virtual data center infrastructure. The ability to dynamically scale, while dramatically simplifying and automating operational tasks, is critical to addressing the infrastructure requirements and driving down cost in both virtual and physical deployments.

Design overview

The Symmetrix VMAX design is based on a highly available VMAX Engine with redundant CPU, memory, and connectivity on two directors for fault tolerance. Symmetrix VMAX Engines connect to and scale out linearly through the Virtual Matrix Architecture, which allows resources to be shared within and across Symmetrix VMAX Engines (Figure 7 on page 53). To meet growth requirements, additional engines can be added nondisruptively for efficient and dynamic scaling of capacity and performance that is available to any application on demand.

The Symmetrix VMAX is the only high-end platform with multi-core processors providing maximum performance and energy-efficient capabilities in each Symmetrix VMAX Engine. This unique feature allows entry-level Symmetrix VMAX configurations to deliver significantly more performance in a smaller footprint than any other storage array.


Figure 7 EMC Symmetrix VMAX Series with Enginuity

Available models

EMC introduced the Symmetrix VMAX in two different models:

◆ The entry point Symmetrix VMAX SE system

◆ The highly scalable Symmetrix VMAX system

The Symmetrix VMAX SE is a single engine storage system designed for organizations that require the performance and replication options of a Symmetrix without enterprise-level capacity or scalability objectives.

The Symmetrix VMAX is the high-end storage array that scales from a single-engine configuration with a dedicated system cabinet and a single storage bay to a larger eight-engine configuration with up to 10 storage bays capable of holding 2,400 physical disk drives. If customers need to increase host connections, capacity, or performance, online system upgrades are achieved by adding engines. Table 1 on page 54 lists Symmetrix VMAX specifications.


Architecture details

Each Symmetrix VMAX Engine contains two directors with extensive CPU processing power, global cache memory, and a Virtual Matrix Interface for inter-director communications.

The Symmetrix VMAX Engines are configurable to provide maximum, flexible host connectivity and back-end physical drive loops. Front-end port configurations are Fibre Channel, iSCSI, and FICON for host connections, and Fibre Channel and Gigabit Ethernet for remote replication. Speeds auto-negotiate between 1 and 4 Gb/s based on the connection type.

The processing power is provided by dual quad-core 2.33 GHz Xeon processors from Intel. Each director includes up to 64 GB of memory using eight cache memory modules. Current memory module sizes are 2 GB, 4 GB, and 8 GB, which provide a total capacity of 16, 32, or 64 GB per director, or a maximum of 128 GB of physical memory per Symmetrix VMAX Engine.
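The memory figures quoted here follow directly from the module and director counts; the short Python fragment below simply works through the arithmetic (it is an illustration, not configuration tooling):

```python
# Worked arithmetic for the memory figures quoted above (illustrative only).
MODULES_PER_DIRECTOR = 8
DIRECTORS_PER_ENGINE = 2
MAX_ENGINES = 8

for module_gb in (2, 4, 8):
    per_director = MODULES_PER_DIRECTOR * module_gb      # 16 / 32 / 64 GB
    per_engine = per_director * DIRECTORS_PER_ENGINE     # up to 128 GB
    print(f"{module_gb} GB modules -> {per_director} GB/director, "
          f"{per_engine} GB/engine")

# A fully configured eight-engine system reaches 8 x 128 GB = 1 TB,
# matching the 1 TB maximum physical memory listed in Table 1.
print("max system memory:", MAX_ENGINES * 64 * DIRECTORS_PER_ENGINE, "GB")
```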

Table 1 Symmetrix VMAX specifications

Feature                      VMAX SE                         VMAX
CPU processing power         16 2.33 GHz processor cores     Up to 128 2.33 GHz processor cores
Max. logical devices         42,000                          64,000
Max. physical memory         128 GB                          1 TB
FC/FICON/GigE/iSCSI ports    16 / 8 / 8 / 8                  128 / 64 / 64 / 64
Max. back-end ports          16 × 4 Gb/s FC                  128 × 4 Gb/s FC
Max. drive count             360                             2,400
Enterprise Flash Drives      200/400 GB (4 Gb)               200/400 GB (4 Gb)
15,000 rpm drives            146, 300, 450 GB                146, 300, 450 GB
10,000 rpm drives            400 GB                          400 GB
7,200 rpm drives             1 TB SATA                       1 TB SATA
Max. usable capacity         303 TB                          Up to 2 PB


To enable multiple director boards and instances to work together as a single system, a high-bandwidth, low-latency, nonblocking communication matrix is used. The Symmetrix VMAX Virtual Matrix Interconnect is implemented using the industry-standard RapidIO (RIO) protocol through two redundant switching elements. On the physical director board, two separate sets of eight lanes of PCI Express are converted to RIO by the Virtual Matrix Interface. The Matrix Interface Board Enclosure (MIBE) contains two independent matrix switches that provide point-to-point communications between directors. This redundant matrix is used for mirrored writes across directors and for other inter-director signaling and communications.

The Symmetrix VMAX Series delivers a distributed architecture with near-infinite scalability while maintaining a single system to manage. Through the use of the high-speed interconnect, the Symmetrix VMAX provides the building blocks for EMC high-performance storage systems. This has transformed enterprise storage and is the baseline against which current and future storage systems will be measured.

While the benefits of moving to a distributed architecture are numerous and well understood, distributed systems are typically very difficult to manage. The glue that keeps the distributed architecture of the Symmetrix VMAX operating as a single system is the Enginuity operating environment.

Enginuity

The Enginuity operating environment provides the intelligence that controls all components in a Symmetrix VMAX array. It coordinates real-time events related to the processing of production data.

It applies self-optimizing intelligence to deliver the ultimate performance, availability, and data integrity required in a platform for advanced storage functionality.


Table 2 highlights some of the new features with Symmetrix VMAX and Enginuity release level 5874.

Table 2 Symmetrix VMAX with Enginuity 5874 enhancements

Storage Provisioning

Some of the more complex routines of Enginuity manage the internal flow of events while maintaining flexible scalability and intelligent resource utilization for the storage capacity of the Symmetrix VMAX system. A vital consideration that enables Enginuity to balance resource utilization is the placement of data on storage devices, which makes storage provisioning a crucial part of an optimal configuration.

Storage provisioning requires a number of steps that are performed on the host, the SAN, and the storage system. On the storage system, steps include device creation, mapping of the devices to front-end director ports, and masking the devices to each host bus adapter (HBA) on each server. While the process is not difficult, provisioning can be cumbersome because it is a multi-step process and must be performed for each server and for each HBA.

With the Symmetrix VMAX there is a new facility for storage provisioning referred to as Auto-provisioning Groups. This greatly simplifies the process of storage provisioning and reduces the time it takes to initially provision storage and subsequently add capacity or change connectivity by adding or removing HBAs or front-end ports.

The core concept of Auto-provisioning Groups is the logical grouping of related initiators, front-end ports, and storage devices, and the creation of views that associate the storage devices with the front-end ports and the initiators by performing the necessary device mapping and masking in a single operation. An Initiator Group contains all HBAs within a single server, or a group of servers that share the same storage. A Port Group contains one or more front-end directors, and a Storage Group contains all devices used by an application, server, or cluster of servers. When the Masking View is created, the required mapping and masking is performed automatically.

Once the Masking View is created, reprovisioning and adding capacity is simply a matter of adding devices to the Storage Group; the Masking View is updated and, again, the required mapping and masking is performed automatically. Similarly, connectivity is modified by adding or removing HBAs in the Initiator Group or ports in the Port Group.
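The grouping model described above (Initiator, Port, and Storage Groups combined by a Masking View) can be sketched conceptually. The Python fragment below is not SYMCLI and all names in it are hypothetical; it only illustrates how a single view operation derives the full set of mapping/masking entries:

```python
# Conceptual sketch (not SYMCLI): how a masking view derives device mapping
# and masking from the three Auto-provisioning Groups described above.
# All names below are hypothetical examples.

initiator_group = {"hba_wwn_1", "hba_wwn_2"}            # server HBAs
port_group = {"FA-7E:0", "FA-8E:0"}                     # front-end ports
storage_group = {"dev_0100", "dev_0101", "dev_0102"}    # application devices

def build_view(ig, pg, sg):
    """A view associates every device with every initiator/port pair."""
    return {(hba, port, dev) for hba in ig for port in pg for dev in sg}

view = build_view(initiator_group, port_group, storage_group)
print(len(view), "mapping/masking entries created in one operation")

# Reprovisioning: adding one device to the Storage Group automatically
# extends the view -- no per-HBA, per-port rework.
storage_group.add("dev_0103")
print(len(build_view(initiator_group, port_group, storage_group)),
      "entries after adding a device")
```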

Virtual Provisioning

Maintenance windows for production environments are constantly being squeezed. Administrators no longer have the time to carefully plan storage provisioning for preferred placement to keep an optimal configuration. This time constraint also means that much of the allocated storage could remain unused and thus wasted, which adds to the overall operational cost.

To help address these storage management challenges, EMC introduced Virtual Provisioning, which is based on industry-known "thin provisioning" concepts, whereby more capacity can be presented to a host than is physically allocated at the outset, and multiple thin devices or applications consume storage only as needed from a common storage pool.

Virtual Provisioning allows storage to be provisioned independently of the physical storage infrastructure. By creating a thin device that initially is larger than required by the application, organizations can reduce the need to re-provision new storage later on. Symmetrix thin devices are logical devices that can be used like any standard Symmetrix devices, but unlike traditional devices, thin devices do not need to have physical storage completely allocated at the time the device is created or presented to a host. A thin device is not usable until it has been bound to a shared storage pool known as a thin pool; multiple thin devices may be bound to any given thin pool. The thin pool is composed of data devices that provide the actual physical storage to support the thin device allocations.

Virtual Provisioning can reduce the amount of unused physical storage by having multiple thin devices share a single storage pool, drawing physical storage only as needed. Storage administrators can reduce or avoid the pre-allocation of physical storage to applications, thereby reducing storage costs and energy consumption.
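The thin-device behavior described above can be sketched in a few lines. This Python fragment is a conceptual illustration only (class names, extent counts, and sizes are hypothetical, not Enginuity internals): a thin device presents its full size but draws physical extents from its bound pool only on first write.

```python
# Illustrative sketch of thin (virtual) provisioning: a thin device presents
# its full size to the host but draws physical extents from a shared pool
# only when a region is first written. All names are hypothetical.

class ThinPool:
    def __init__(self, extents):
        self.free = extents                  # physical extents backing the pool

    def allocate(self):
        if self.free == 0:
            raise RuntimeError("thin pool exhausted")
        self.free -= 1

class ThinDevice:
    def __init__(self, presented_extents, pool):
        self.presented = presented_extents   # size the host sees
        self.pool = pool                     # bound thin pool
        self.allocated = set()               # extents actually backed

    def write(self, extent):
        if extent not in self.allocated:     # first write triggers allocation
            self.pool.allocate()
            self.allocated.add(extent)

pool = ThinPool(extents=100)
dev_a = ThinDevice(presented_extents=200, pool=pool)   # over-provisioned
dev_b = ThinDevice(presented_extents=200, pool=pool)

for ext in range(10):
    dev_a.write(ext)
dev_b.write(0)

print("pool extents consumed:", 100 - pool.free)
```

Two devices presenting 400 extents between them consume only 11 physical extents, which is the cost-saving the text describes.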


Tiering

The Symmetrix VMAX systems offer many choices for drive capacity and performance requirements. Enterprise Flash Drives are available to provide maximum performance for latency-sensitive applications or applications that perform significant random read operations. Fibre Channel drives are offered in various capacities and rotational speeds. High-capacity SATA II drives can be seamlessly included in Symmetrix VMAX arrays for consolidating backup to disk and lower-tiered applications.

The immense capacities of the Symmetrix VMAX make the platform the ideal choice for storage consolidation. The measurement of successful consolidation is resource availability for the business-critical applications and simplified storage-tier management of secondary and tertiary application data.

The Symmetrix VMAX is the only platform built for true tiered storage. By using physical disk groupings, customers can segregate their production data across different physical disk drives; from tier 0 Enterprise Flash Drives to tier 3 1 TB SATA drives (or something in between), the Symmetrix VMAX provides tremendous flexibility with physical storage options. Also included with the Symmetrix VMAX are capabilities that allow users to set priorities for application data at the physical drive level. Assigning physical disk priorities allows finer granularity than the physical grouping used to separate tiers. Symmetrix Priority Controls offer the ability to manage multiple application workloads by preferentially allocating disk resources to higher-tier applications during times of disk contention. This helps organizations maintain multiple workloads cost-effectively within one consolidated storage unit and thereby satisfy tiered storage objectives.

True enterprise tiered storage goes beyond the initial data classifications for basic tiering. Over time, data classifications change; tier 1 data today might change to a lesser tier tomorrow, and the faster devices could then be used for a new application or different sets of data. It is not just about having different drive sizes and speeds; it is about being able to move data across different device types configured with different protection schemes, all while keeping production data online. Symmetrix VMAX systems provide the capability to move data across drive types and protection schemes through the use of Enhanced Virtual LUN. A user can now relocate data from a RAID 1 device on a 146 GB, 15k rpm drive to a RAID 5 device on a 1 TB SATA drive operating at 7,200 rpm, all without any disruption in service.

Tiering in a Symmetrix VMAX system does not stop at the physical disk level. With all I/O going through cache, Symmetrix VMAX cache resources can become constrained. Dynamic cache partitioning allows users to segregate cache resources, giving more cache slots to tier 1 applications and keeping lesser tiers to a much smaller amount of cache so as not to adversely affect high-priority production workloads. For example, dynamic cache partitioning can isolate the workload of a TimeFinder/Clone session so the internal copy operation has no impact on any of the other production applications or workloads sharing the Symmetrix system.


Replication

Organizations continue to ask EMC for solutions that provide disaster recovery with less data loss over greater distances while using less physical capacity. In the DMX-4 with Enginuity 5773 code, EMC introduced a replication technique, Cascaded SRDF, that supported a three-site disaster recovery configuration. The core benefit behind a "cascaded" configuration is its inherent ability to continue replication, with minimal user intervention, from the secondary site to a tertiary site with SRDF/A in the event that the primary site goes down. This enables a faster recovery at the tertiary site, provided that is where the customer is looking to restart the operation.

Introduced with Symmetrix VMAX is SRDF/Extended Distance Protection (SRDF/EDP), a new two-site disaster restart solution that gives customers the ability to achieve minimum data loss at an out-of-region site at a much lower cost. Using Cascaded SRDF as the foundation for this solution, combined with the use of the new "diskless" device (an R21) at an intermediate bunker site, Symmetrix VMAX systems provide data pass-through to the out-of-region site using SRDF/A.

As EMC works to meet the ever-changing requirements to improve disaster recovery options and reduce data loss over extended distances, efforts are also made to keep data consistent in the event of a link failure or when devices are added to the replication configuration.

To minimize the impact of network issues, EMC introduced the Transmit Idle capability for the SRDF/Asynchronous mode of replication. It allows SRDF/A to remain in a consistent operational state during periods of temporary peak workloads, periods of network congestion, or even temporary network outages. It allows the SRDF group to enter the Transmit Idle state following the expiration of the Link Limbo timer. In a Transmit Idle state, the remote SRDF mirror remains ready on the link for as long as there are sufficient cache resources available for SRDF/A to continue to operate. This means that data is not sent to the remote device during a brief outage, but SRDF/A remains active, allowing for automatic recovery with no user intervention.

When the amount of cache needed by SRDF/A in support of the application resource requirements exceeds the amount of available memory, all SRDF/A sessions drop and remote replication is lost until conditions improve and SRDF/A can be manually restarted. The SRDF/A Delta Set Extension (DSE) feature prevents the SRDF/A session from dropping by allowing the system to offload data that would normally stay in cache to preconfigured DSE volumes. It is not meant to replace proper SRDF/A cache and bandwidth sizing or to fix unbalanced configurations. SRDF/A DSE devices are configured as a pool of storage to offload data from cache when cache is overrun during a link failure. Both Transmit Idle and Delta Set Extension are features that were introduced in the previous Enginuity release.

The latest version of Enginuity on the Symmetrix VMAX has introduced the ability to add or remove devices to and from active SRDF/A sessions while keeping the recovery data consistent at the remote site. The feature is called Consistency Exempt. Consistency Exempt allows devices to be excluded from the dependent-write consistency calculation during certain operations; its attributes are maintained on an SRDF mirror-level basis. This reduces exposure while managing application-level devices in asynchronous replication environments and simplifies storage management of SRDF/A sessions.
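The Delta Set Extension spill behavior described above can be sketched as a simple overflow policy. The Python fragment below is purely illustrative (slot counts and names are hypothetical, not Enginuity internals): while the link cannot drain, delta-set data that no longer fits in cache is offloaded to the DSE pool instead of forcing the session to drop.

```python
# Illustrative sketch of SRDF/A Delta Set Extension: when the capture cycle
# cannot drain (for example, during a link outage) and cache fills, delta-set
# data spills to a preconfigured DSE pool instead of dropping the SRDF/A
# session. All names and sizes are hypothetical.

CACHE_LIMIT = 4          # cache slots available to SRDF/A
DSE_POOL_LIMIT = 10      # slots in the DSE save pool

cache, dse_pool = [], []

def capture_write(track):
    if len(cache) < CACHE_LIMIT:
        cache.append(track)
    elif len(dse_pool) < DSE_POOL_LIMIT:
        dse_pool.append(track)           # offload to DSE volumes
    else:
        raise RuntimeError("DSE pool exhausted: session would drop")

# Link is down: 12 writes arrive and nothing drains.
for t in range(12):
    capture_write(t)

print(f"cache slots used: {len(cache)}, spilled to DSE: {len(dse_pool)}")
```

This also makes the sizing caveat concrete: DSE buys time during a transient outage, but a chronically undersized cache or link simply fills the pool as well.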

Management

All of the enhancements made within the Symmetrix VMAX can be managed and monitored using either the Solutions Enabler Command Line Interface (SYMCLI) or the Symmetrix Management Console (SMC) graphical user interface. Both Solutions Enabler and SMC require version 7.0 for administering the Symmetrix VMAX. SMC 7.0 updates focus on ease of use, with the objective of simplifying management tasks by introducing wizards and templates: wizards streamline the device selection process, and templates group together related resources and tasks so they can be managed together by a single wizard.


Summary

The Symmetrix VMAX with the Enginuity operating environment provides the building block that enables storage systems to keep pace with the requirements of high-performance enterprise and commercial computing data centers. The combination of the performance advantages provided by the distributed matrix architecture, a terabyte of raw global memory, a high-speed interconnect communication matrix, and a robust director design for both host and disk directors provides the greatest sustainable performance of any storage system available.

It is the platform for leading-edge performance and capacity: it presents proven technology for storage consolidation, offers incomparable tools for optimized tiering, and delivers industry-leading business continuity applications. All of this comes with the high-quality support and reliability features that EMC customers have come to expect. Symmetrix VMAX is the world-class storage system that is the ideal choice for enterprise and commercial data centers supporting high-priority business applications.


Symmetrix DMX hardware and EMC Enginuity features

Symmetrix DMX hardware architecture and the EMC Enginuity operating environment are the foundation for the Symmetrix DMX storage platform. This environment consists of the following components:

◆ Symmetrix DMX hardware

◆ Enginuity-based operating functions

◆ Symmetrix application program interface (API) for mainframe

◆ Symmetrix-based applications

◆ Host-based Symmetrix applications

◆ Independent software vendor (ISV) applications

Figure 8 shows the relationship between these software layers and the Symmetrix hardware.

Figure 8 Symmetrix hardware and software layers

Symmetrix Hardware

Symmetrix Application Program Interface (API) for Mainframe

Symmetrix-Based Applications

Host-Based Symmetrix Applications

Independent Software Vendor Applications

Enginuity Operating Environment Functions

ICO-IMG-000199


EMC Enginuity operating environment

EMC Enginuity is the operating environment for all Symmetrix storage systems. Enginuity manages and ensures the optimal flow and integrity of data through the different hardware components. It also manages Symmetrix operations associated with monitoring and optimizing internal data flow. This ensures the fastest response to the user's requests for information, along with protecting and replicating data. Enginuity provides the following services:

◆ Manages system resources to intelligently optimize performance across a wide range of I/O requirements.

◆ Ensures system availability through advanced fault monitoring, detection, and correction capabilities and provides concurrent maintenance and serviceability features.

◆ Offers the foundation for specific software features available through EMC disaster recovery, business continuance, and storage management software.

◆ Provides functional services for both Symmetrix-based functionality and for a large suite of EMC storage application software.

◆ Defines priority of each task, including basic system maintenance, I/O processing, and application processing.

◆ Provides uniform access through APIs for internal calls, and provides an external interface to allow integration with other software providers and ISVs.

Figure 9 on page 62 illustrates the point-to-point architecture and interconnection of the major components in the Symmetrix DMX storage system.


Figure 9 Symmetrix DMX logical diagram

The diagram shows ESCON, FICON, Fibre Channel, and multiprotocol (FICON, GigE, iSCSI) channel directors and Fibre Channel disk directors, each attached through redundant Direct Matrix paths to eight 32 GB global memory regions, supported by redundant power supplies, battery backup unit modules, cooling, environmental control and status signaling (XCM), and a service processor (KVM and server) with UPS and modem.

Note: The DMX-3 system supports Fibre Channel, FICON, ESCON, and iSCSI connections as well as GigE, Fibre Channel, and ESCON remote connections. The DMX-3 system midplane has four slots that support either front-end channel directors or back-end disk directors.


Symmetrix DMX

This section discusses supported Symmetrix DMX features for mainframe environments.

I/O support features

Parallel Access Volume (PAV) — Parallel Access Volumes were implemented within z/OS, allowing one I/O to take place for each base unit control block (UCB) and one for each statically or dynamically assigned alias UCB. These alias UCBs allow parallel I/O access to a volume. Current Enginuity releases provide support for both static and dynamic PAVs. Dynamic PAVs allow fewer aliases to be defined within a logical control unit. With dynamic PAVs, aliases are applied to the base UCBs (devices) that need them the most. This enables Workload Manager (in goal mode) to dynamically assign an alias to a device.

Multiple Allegiance (MA) — While PAVs facilitate multiple parallel accesses to the same device from a single LPAR, Multiple Allegiance (MA) allows multiple parallel nonconflicting accesses to the same device from multiple LPARs. Multiple Allegiance I/O executes concurrently with PAV I/O. The Symmetrix storage system treats them equally and guarantees data integrity by serializing write I/Os where extent conflicts exist.
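The write serialization on extent conflicts described for Multiple Allegiance can be sketched as a simple overlap check. This Python fragment is a conceptual simplification only (track ranges and the admission function are hypothetical, not the actual Enginuity algorithm): reads and non-overlapping writes proceed in parallel, while a write whose extent overlaps an active write is held.

```python
# Conceptual sketch of Multiple Allegiance write serialization: parallel
# access is allowed unless a new write's extent range overlaps a write
# already in flight. This is a hypothetical simplification, not EMC code.

active_writes = []   # list of (start_track, end_track) extents in flight

def can_start_write(start, end):
    """Admit a new write only if its extent overlaps no active write."""
    for s, e in active_writes:
        if start <= e and s <= end:      # ranges overlap -> extent conflict
            return False
    return True

active_writes.append((100, 199))         # LPAR A writing tracks 100-199
print(can_start_write(200, 299))         # LPAR B, disjoint extent: admitted
print(can_start_write(150, 250))         # overlapping extent: serialized
```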

Host connectivity options — Mainframe host connectivity is supported through serial ESCON and FICON channels. Symmetrix storage systems appear to mainframe operating systems as any of the following control units: IBM 3990, IBM 2105, and IBM 2107. The physical storage devices can appear to the mainframe operating system as any mixture of different sized 3380 and 3390 devices.

ESCON support — Enterprise Systems Connection (ESCON) is a fiber optic connection technology that interconnects mainframe computers, workstations and network-attached storage devices across a single channel and supports half duplex data transfers. ESCON can be used for handling Symmetrix Remote Data Facility (SRDF) remote links.

FICON support — Fiber Connection (FICON) is a fiber optic channel technology that extends the capabilities of the previous fiber optic channel standard, ESCON. Unlike ESCON, FICON supports full duplex data transfers and enables greater throughput rates over longer distances. FICON uses a mapping layer based on technology developed for Fibre Channel and multiplexing technology, which allows small data transfers to be transmitted at the same time as larger ones. Symmetrix DMX systems with Enginuity release level 5670 and above support FICON ports.

Fibre Channel support — Fibre Channel is a supported option in SRDF environments.

GigE support — GigE is a supported option in SRDF environments. Symmetrix GigE directors in an SRDF environment provide direct end-to-end TCP/IP connectivity for remote replication solutions over extended distances. This eliminates the need for costly FC-to-IP converters and allows existing IP infrastructure to be used without major disruption.

Data protection options

Symmetrix storage systems incorporate many standard features that provide a higher level of data availability than conventional Direct Access Storage Device (DASD). These options ensure a greater level of data recoverability and availability. They are configurable at the logical volume level, so different protection schemes can be applied to different classes of data within the same Symmetrix storage system on the same physical device. Customers choose data protection options, such as the following, to match their data requirements:

◆ Mirroring (RAID 1) or RAID 10

◆ RAID 6 (6+2) and RAID 6 (14+2)

◆ RAID 5 (3+1) and RAID 5 (7+1)

◆ Symmetrix Remote Data Facility (SRDF)

◆ TimeFinder

◆ Dynamic Sparing

◆ Global Sparing

Other features

Other IBM-supported compatibility features include:

◆ Channel Command Emulation for IBM ESS 2105/2107

◆ Concurrent Copy

◆ Peer to Peer Remote Copy (PPRC)

◆ Extended Remote Copy (XRC)

◆ Dynamic Channel Path Management (DCM)

◆ Dynamic Path Reconnection (DPR) Support


◆ Host Data Compression

◆ Logical Path and Control Unit Address Support (CUADD)

◆ Multi-System Imaging

◆ Mainframe systems hypervolumes

◆ Partitioned Dataset (PDS) Search Assist

◆ Parallel Access Volumes (PAVs)

◆ Dynamic Parallel Access Volumes (DPAVS)


ResourcePak Base for z/OS

EMC ResourcePak Base for z/OS is a software facility that makes communication between mainframe-based applications (provided by EMC or ISVs) and a Symmetrix storage system more efficient. ResourcePak Base is designed to improve performance and ease of use of mainframe-based Symmetrix applications.

ResourcePak Base delivers EMC Symmetrix Control Facility (EMCSCF) for IBM and IBM-compatible mainframes. EMCSCF provides a uniform interface for EMC and ISV software products, so that all products use the same interface at the same function level. EMCSCF delivers a "persistent address space" on the host that facilitates communication between the host and the Symmetrix storage system, as well as between other EMC-delivered and partner-delivered applications.

Figure 10 z/OS SymmAPI architecture

ResourcePak Base is the delivery mechanism for EMC Symmetrix Applications Programming Interface for z/OS (SymmAPI™-MF). ResourcePak Base provides a central point of control by giving software a persistent address space on the mainframe for SymmAPI-MF functions that perform tasks such as the following:

◆ Maintaining an active repository of information about EMC Symmetrix devices attached to z/OS environments and making that information available to other EMC products.

◆ Performing automation functions.


◆ Handling inter-LPAR (logical partition) communication through the Symmetrix storage system.

ResourcePak Base provides faster delivery of new Symmetrix functions by EMC and ISV partners, along with easier upgrades. It also provides the ability to gather more meaningful data when using tools such as TimeFinder/Mirror Query because device status information is now cached along with other important information.

ResourcePak Base for z/OS is a prerequisite for EMC mainframe applications like the TimeFinder/Clone Mainframe Snap Facility or SRDF Host Component for z/OS, and is included with these products.

Features

ResourcePak Base provides the following functionality with EMCSCF:

◆ Cross-system communication
◆ Non-disruptive SymmAPI-MF refreshes
◆ Save Device Monitor
◆ SRDF/A Monitor
◆ Group Name Service (GNS) support
◆ Pool management
◆ SRDF/AR resiliency
◆ SRDF/A Multi-Session Consistency
◆ SWAP services
◆ Recovery services
◆ Licensed feature code management

Cross-system communication

Inter-LPAR communication is handled by the EMCSCF cross-system communication (CSC) component. CSC uses a Symmetrix storage system to facilitate communications between LPARs. Several EMC Symmetrix mainframe applications use CSC to handle inter-LPAR communications.

Non-disruptive SymmAPI-MF refreshes

As of version 5.3, EMCSCF allows the SymmAPI-MF to be refreshed non-disruptively. Refreshing SymmAPI-MF does not impact currently executing applications that use SymmAPI-MF; for example, SRDF Host Component for z/OS or TimeFinder/Clone Mainframe Snap Facility.


Save Device Monitor

The Save Device Monitor periodically examines the consumed capacity of the device pool (SNAPPOOL) used by TimeFinder/Snap with the VDEV licensed feature code enabled. The Save Device Monitor also checks the capacity of the device pool (DSEPOOL) used by SRDF/A.

The Save Device Monitor function of EMCSCF provides a way to:

◆ Automatically check space consumption thresholds.

◆ Trigger an automated response that is tailored to the specific needs of the installation.
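The monitor's behavior can be sketched as a simple threshold check (a hedged sketch; the percentages and response actions below are illustrative assumptions, since real responses are tailored by each installation):

```python
# Conceptual sketch of a Save Device Monitor style check: compare
# pool consumption against descending thresholds and return the
# installation-defined response for the first threshold crossed.

THRESHOLDS = [                    # (percent used, action) - illustrative
    (95, "suspend SRDF/A session"),
    (80, "alert operations and expand DSEPOOL"),
    (60, "issue warning message"),
]

def check_pool(used_tracks, total_tracks):
    """Return (percent_used, action or None) for one polling cycle."""
    pct = 100 * used_tracks / total_tracks
    for limit, action in THRESHOLDS:
        if pct >= limit:
            return pct, action
    return pct, None              # below all thresholds: do nothing

pct, action = check_pool(used_tracks=8_500, total_tracks=10_000)
print(f"{pct:.0f}% used -> {action}")
```

In practice the trigger would drive site automation (console messages, WTO exits, and so on) rather than a return value.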

SRDF/A Monitor

The SRDF/A Monitor in ResourcePak Base is designed to:

◆ Find EMC Symmetrix controllers that are running SRDF/A.

◆ Collect and write SMF data about those controllers.

After ResourcePak Base is installed, the SRDF/A Monitor is started as a subtask of EMCSCF.

Group Name Service support

ResourcePak Base includes support for Symmetrix Group Name Service (GNS). Using GNS, you can define a device group once and then use that single definition across multiple EMC products on multiple platforms. This means that you can use a device group defined through GNS with both mainframe and open systems-based EMC applications. GNS also allows you to define group names for volumes that can then be operated upon by various other commands.

Pool management

With ResourcePak Base V5.7 or higher, generalized device pool management is a provided service. Pool devices are a predefined set of devices that provide a pool of physical space. Pool devices are not host-accessible. The CONFIGPOOL commands allow management of SNAPPOOLs or DSEPOOLs with CONFIGPOOL batch statements.

SRDF/AR resiliency

SRDF/AR can recover from internal failures without manual intervention. Device replacement pools for SRDF/AR (or SARPOOLs) are provided to prevent SRDF/AR from halting due to device failure. In effect, SARPOOLs are simply a group of devices that are unused until SRDF/AR needs one of them.


SRDF/A Multi-Session Consistency

SRDF/A Multi-Session Consistency (MSC) is a task in EMCSCF that ensures remote R2 consistency across multiple Symmetrix storage systems running SRDF/A. MSC provides the following:

◆ Coordination of SRDF/A cycle switches across systems.

◆ Up to 24 SRDF groups in a multi-session group.

◆ One SRDF/A session and one SRDF/A group per Symmetrix storage system when using Enginuity release level 5x70.

◆ With Enginuity release level 5x71 and later, SRDF/A groups are dynamic and are not limited to one per Symmetrix storage system. Group commands of ENABLE, DISPLAY, DISABLE, REFRESH, and RESTART are now available.
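Conceptually, the coordinated cycle switch resembles a two-phase commit (the class and method names below are invented for illustration; this is not the EMCSCF implementation): either every SRDF/A session in the group moves to its next cycle, or none does, which is what keeps the R2 images consistent across systems.

```python
# Two-phase sketch of an MSC-style coordinated cycle switch across
# several Symmetrix systems: phase 1 asks every session whether it
# is ready to switch; phase 2 commits the switch everywhere at once.

class Session:
    def __init__(self, name):
        self.name = name
        self.cycle = 0
        self.ready = False

    def prepare_switch(self):       # phase 1: quiesce and vote yes/no
        self.ready = True
        return self.ready

    def commit_switch(self):        # phase 2: all sessions switch together
        self.cycle += 1
        self.ready = False

def coordinated_switch(sessions):
    if all(s.prepare_switch() for s in sessions):
        for s in sessions:
            s.commit_switch()
        return True
    return False                    # any failure: nobody switches

group = [Session("SYMM-A"), Session("SYMM-B"), Session("SYMM-C")]
coordinated_switch(group)
print([s.cycle for s in group])     # all sessions advanced in lockstep
```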

SWAP services

ResourcePak Base deploys a SWAP service in EMCSCF. It is used by EMC AutoSwap™ for planned outages with the ConGroup Continuous Availability Extensions (CAX).

Recovery services

Recovery service commands allow you to perform recovery on local or remote devices (if the links are available for the remote devices).

Licensed feature code management

EMCSCF manages licensed feature codes (LFCs) to enable separately chargeable features in EMC software. These features require an LFC to be provided during the installation and customization of EMCSCF. LFCs are available for:

◆ Symmetrix Priority Control
◆ Dynamic Cache Partitioning
◆ AutoSwap (ConGroup with AutoSwap Extensions); separate LFCs are required for planned and unplanned swaps
◆ EMC Compatible Flash (Host Software Emulation)
◆ EMC z/OS Storage Manager
◆ SRDF/Asynchronous (MSC)
◆ SRDF/Automated Replication
◆ SRDF/Star
◆ TimeFinder/Clone (TARGET)
◆ TimeFinder/Consistency Group (CONSISTENT)
◆ TimeFinder/Snap (VDEV)


SRDF family of products for z/OS

At the conceptual level, SRDF is mirroring (RAID level 1) of one logical disk device (the primary, or source/R1, within a primary Symmetrix storage system) to a second logical device (the secondary, or target/R2, in a physically separate secondary Symmetrix storage system) over ESCON, Fibre Channel, or GigE high-speed communication links. The distance separating the two Symmetrix storage systems can vary from a few feet to thousands of miles. SRDF was the first software product for the Symmetrix storage system. Its basic premise is that a remote mirror of data (data in a different Symmetrix storage system) can serve as a valuable resource for:

◆ Protecting data using geographical separation.

◆ Giving applications a second location from which to retrieve data should the primary location become unavailable for any reason.

◆ Providing a means to establish a set of volumes on which to conduct parallel operations, such as testing or modeling.

SRDF has evolved to provide different operation modes (synchronous, semi-synchronous, adaptive copy — write pending mode, adaptive copy — disk mode, data mobility, and most recently, asynchronous mode). More advanced solutions have been built upon it such as SRDF/Automated Replication and SRDF/Star.

Constant throughout these evolutionary stages has been control of the SRDF family products by the mainframe-based application called SRDF Host Component. SRDF Host Component is a control mechanism through which all SRDF functionality is made available to the mainframe user. EMC Consistency Group for z/OS is another useful tool for managing dependent-write consistency across inter-Symmetrix links with one or more mainframes attached.


Figure 11 SRDF family for z/OS

Figure 11 indicates that the modules on the right plug into one of the modules in the center as an add-on function. For example, SRDF Consistency Group is a natural addition for customers running SRDF in synchronous mode.

SRDF Host Component for z/OS

SRDF Host Component for z/OS, along with ResourcePak Base for z/OS (the API services module), is delivered when ordering a member of the SRDF product family.

Note: For more information about SRDF technology in general, go to: http://www.emc.com/products/family/srdf-family.htm.

SRDF mainframe features

SRDF mainframe features include the following:

◆ Ability to deploy SRDF solutions across the enterprise: SRDF Host Component can manage data mirroring whether that data is in CKD or FBA format. In these deployments, both a mainframe and one or more open systems hosts are attached to the primary side of the SRDF relationship. Deploying SRDF across an enterprise is not unique to the mainframe; it is simply implemented through different tools.

◆ Support for either ESCON or FICON host channels regardless of the SRDF link protocol employed: SRDF is a Symmetrix-to-Symmetrix protocol that mirrors data on both sides of a communications link. Host connectivity to the Symmetrix storage system (ESCON vs. FICON) is inconsequential to the protocols used in moving data across the SRDF links. Mainframe environments support all the standard link protocols: ESCON, Extended ESCON, Fibre Channel, and GigE.

◆ Software support for taking an SRDF link offline: SRDF Host Component has a software command that can take an SRDF link offline independently of taking the target volume offline. This feature is useful when there are multiple links in the configuration and only one is experiencing issues, for example too many bounces (sporadic link loss) or error conditions. In this case it is unnecessary to take all links offline; taking only the one in question offline is sufficient.

◆ SRDF Host Component additional interfaces: Besides the console interface, all features of SRDF Host Component can be employed using REXX scripting and/or the Stored Procedure Executive (SPE), a powerful tool for automating repeated processes.

Concurrent SRDF and SRDF/Star

SRDF/Star is built upon several key technologies:

◆ Dynamic SRDF, Concurrent SRDF

◆ ResourcePak Base for z/OS

◆ SRDF/Synchronous

◆ SRDF/Asynchronous

◆ Consistency Group

◆ Certain features within Enginuity

SRDF/Star provides advanced multi-site business continuity protection. It augments concurrent SRDF/S and SRDF/A operations from the same primary volumes with the ability to incrementally establish an SRDF/A session between the two remote sites in the event of a primary site outage. This capability is only available through SRDF/Star software.


SRDF/Star is a combination of mainframe host software and Enginuity functionality that operates concurrently.

Figure 12 Classic SRDF/Star support configuration

The concurrent configuration option of SRDF/A provides the ability to restart an environment at long distances with minimal data loss, while simultaneously providing a zero data loss restart capability at a local site. Such a configuration provides protection for both a site disaster and a regional disaster, while minimizing performance impact and loss of data.

In a Concurrent SRDF/A configuration without SRDF/Star functionality, the loss of the primary A site would normally mean that the long distance replication would stop and data would no longer propagate to the C site. Data at site C would continue to age as production was resumed at site B. Resuming SRDF/A between sites B and C would require a full resynchronization to re-enable disaster recovery protection. This consumes both time and resources.

SRDF/Star provides a rapid re-establishment of cross-site protection in the event of a primary site (A) failure. Rather than a full resynchronization between sites B and C, SRDF/Star provides a differential B-C synchronization, dramatically reducing the time to remotely protect the new production site. SRDF/Star also provides a mechanism for the user to determine which site (B or C) has the most current data in the event of a rolling disaster affecting site A. In all cases, the choice of which site to use in a failure is left to the customer's discretion.
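The saving from a differential B-C resynchronization can be pictured with a simple change-bitmap model (a hedged sketch; the track sets and function name are illustrative assumptions, not the Enginuity implementation): each side tracks what changed since the last common consistency point, and only the union of those changes must be copied.

```python
# Illustrative sketch: a full resync copies every track, while a
# differential resync copies only tracks either site changed since
# the last point at which B and C were known to be consistent.

def differential_tracks(changed_on_b, changed_on_c):
    """OR the two change bitmaps: a track must be refreshed if
    either site wrote it since the last common point."""
    return changed_on_b | changed_on_c

TOTAL_TRACKS = 1_000_000                 # size of a full B-C copy
b_writes = {10, 11, 12, 500_000}         # tracks changed at site B
c_writes = {12, 999_999}                 # tracks changed at site C

to_copy = differential_tracks(b_writes, c_writes)
print(f"differential: {len(to_copy)} tracks vs full: {TOTAL_TRACKS}")
```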

Cascaded SRDF

Cascaded SRDF, supported in Enginuity 5773 and greater, supports a three-site disaster recovery configuration. The core benefit of a "cascaded" configuration is its inherent capability to continue replicating, with minimal user intervention, from the secondary site to a tertiary site with SRDF/A in the event that the primary site goes down. This enables a faster recovery at the tertiary site, provided that is where the customer intends to restart operations.

Prior to Enginuity 5773, an SRDF device could be a primary device (R1) or a secondary device (R2); however, it could not serve in both roles simultaneously. Cascaded SRDF is a three-site disaster recovery configuration in which data from a primary site is synchronously replicated to a secondary site, and then asynchronously replicated to a tertiary site.

Cascaded SRDF introduces the SRDF R21 device. The R21 device assumes the dual roles of primary (R1) and secondary (R2) device types simultaneously. Data received by this device as a secondary can automatically be transferred onward by this device as a primary (according to the possible modes).

Multi-Session Consistency

In SRDF/A environments, consistency across multiple Symmetrix systems for SRDF/A sessions is provided by the Multi-Session Consistency (MSC) task that executes in the EMCSCF address space. MSC provides consistency across as many as 24 SRDF/A sessions and is enabled by a Licensed Feature Code.

SRDF/AR

SRDF/Automated Replication (SRDF/AR) is an automation solution that uses both SRDF and TimeFinder to provide periodic, asynchronous replication of a restartable data image. In a single-hop SRDF/AR configuration, the magnitude of controlled data loss depends on the cycle time chosen. However, if greater protection is required, a multi-hop SRDF/AR configuration can provide long-distance disaster restart with zero data loss at a middle or "bunker" site.

EMC Geographically Dispersed Disaster Restart (EMC GDDR)

EMC Geographically Dispersed Disaster Restart (GDDR) is a mainframe software product that automates business recovery following both planned outages and disasters, including the total loss of a data center. EMC GDDR achieves this goal by providing monitoring, automation and quality controls for many EMC and third-party hardware and software products required for business restart.

Because EMC GDDR restarts production systems following disasters, it does not reside on the same LPARs that it protects. EMC GDDR resides on a separate logical partition (LPAR) from the host systems that run application workloads.

In a three-site SRDF/Star with AutoSwap configuration, EMC GDDR is installed on a control LPAR at each site. Each EMC GDDR node is aware of the other two EMC GDDR nodes through network connections between each site. This awareness allows EMC GDDR to:

◆ Detect disasters

◆ Identify survivors

◆ Nominate the leader

◆ Recover business at one of the surviving sites

To achieve the task of business restart, EMC GDDR automation extends well beyond the disk level (on which EMC has traditionally focused) and into the host operating system. It is at this level that sufficient controls and access to third party software and hardware products exist, enabling EMC to provide automated recovery services.

EMC GDDR’s main activities include:

◆ Managing planned site swaps (workload and DASD) between the primary and secondary sites and recovering the SRDF/Star with AutoSwap environment.

◆ Managing planned site swaps (DASD only) between the primary and secondary sites and recovering the SRDF/Star with AutoSwap environment.


◆ Managing the recovery of the SRDF environment and restarting SRDF/A in the event of an unplanned site swap.

◆ Active monitoring of the managed environment and responding to exception conditions.

EMC Consistency Group for z/OS

An SRDF consistency group is a collection of devices logically grouped together to provide consistency. Its purpose is to maintain data integrity for applications that are remotely mirrored, particularly those that span multiple RA groups or multiple Symmetrix storage systems. The protected applications may comprise multiple heterogeneous data resource managers spread across multiple host operating systems, spanning mainframe LPARs, UNIX, and Windows servers. These heterogeneous platforms are referred to as hosts.

The dependent-write principle describes the logical dependency between write I/Os that is embedded in the logic of an application, operating system, or database management system (DBMS): a write I/O is not issued by an application until a prior, related write I/O has completed. This is a logical dependency, not a time dependency.

◆ Inherent in all DBMS

◆ Page (data) write is dependent-write I/O based on a successful log write

◆ Applications can also use this technology

◆ Power failures create a dependent-write consistent image

◆ Restart transforms dependent-write consistent to transactional consistent
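The principle above can be illustrated with a minimal sketch (generic names, not any specific DBMS): the dependent page write is held until the predecessor log write completes, so any image captured between the two always shows the log at least as far along as the data.

```python
# Minimal dependent-write sketch: the data (page) write is only
# issued after the log write is acknowledged, so a point-in-time
# image can never contain a data update without its log record.

import threading

log_written = threading.Event()

def write_log(record, storage):
    storage.append(("log", record))
    log_written.set()               # predecessor write completed

def write_page(page, storage):
    log_written.wait()              # dependent write: held until the
    storage.append(("data", page))  # log write is acknowledged

storage = []
t = threading.Thread(target=write_page, args=("page-42", storage))
t.start()
write_log("update page-42", storage)
t.join()
print(storage)    # the log entry always precedes its dependent write
```

The `Event` stands in for the write acknowledgment: the page-write thread is blocked until the log write signals completion.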

An SRDF consistency group has two implementations to preserve a dependent-write consistent image while providing a synchronous disaster restart solution with a zero data loss scenario. Disaster restart solutions that use consistency groups provide remote restart with short recovery time objectives. Zero data loss implies that all completed transactions at the beginning of a disaster will be available at the target storage system after restart.

The first implementation to preserve a dependent-write consistent image uses IOS on mainframe hosts and PowerPath on open systems hosts. This method requires that all hosts have connectivity to all involved Symmetrix storage systems, either through direct connections or indirectly through one or more SAN configurations. These hosts are not required to have the logical devices visible; a path (gatekeeper) to each involved Symmetrix storage system is sufficient. The consistency group definition, software, and licenses must reside on all hosts involved in the consistency group. With the IOS and PowerPath implementation, both read and write I/Os are held.

The second (and preferred) implementation to preserve a dependent-write consistent image is SRDF Enginuity Consistency Assist (SRDF-ECA). This method requires that a minimum of one host have connectivity to all involved Symmetrix storage systems, either through direct connections or indirectly through one or more SAN configurations. EMC recommends having at least two such hosts for redundancy purposes; in the event of a host failure, the second host can automatically take over control of the consistency functions. These hosts are referred to as control hosts, and are the only hosts required to have the consistency group definition, software, and licenses. SRDF-ECA defers write I/Os to all involved logical volumes in the consistency group; subsequent read I/Os are held per logical volume once the first write I/O is deferred. This is done only for a short period of time while the consistency group suspends operations to the secondary volumes.

EMC recommends that SRDF-ECA mode be configured when using a consistency group in a mixed mainframe and open systems environment with both CKD and FBA (fixed block architecture) devices.

When the amount of data for an application becomes very large, the time and resources required for host-based software to protect, back up, or execute decision-support queries on these databases become critical. In addition, the time required to shut down those applications for offline backup is no longer feasible, and alternative implementations are required. One alternative is SRDF consistency group technology, which allows users to remotely mirror the largest data environments and automatically split off dependent-write consistent, restartable copies of applications in seconds without interruption to online services.

A consistency group is a group of SRDF volumes (primary or secondary) that act in unison to maintain the integrity of applications distributed across multiple Symmetrix storage systems, or multiple RA groups within a single Symmetrix storage system. If a primary volume in the consistency group cannot propagate data to its corresponding secondary volume, EMC software suspends data propagation from all primary volumes in the consistency group. The suspension halts all data flow to the secondary volumes and ensures a dependent-write consistent secondary copy of the database up to the point in time that the consistency group tripped.

Tripping a consistency group can occur either automatically or manually. Scenarios in which an automatic trip would occur include:

◆ One or more primary volumes cannot propagate writes to their corresponding secondary volumes.

◆ The remote device fails.

◆ The SRDF directors on either the primary or secondary Symmetrix storage systems fail.

In an automatic trip, the Symmetrix storage system completes the write to the primary volume, but indicates that the write did not propagate to the secondary volume. EMC software, combined with Symmetrix Enginuity, intercepts the I/O and instructs the Symmetrix storage system to suspend all primary volumes in the consistency group from propagating any further writes to the secondary volumes. Once the suspension is complete, writes to all primary volumes in the consistency group continue normally, but are not propagated to the target side until normal SRDF mirroring resumes.

An explicit trip occurs when a susp-cgrp (suspend congroup) command is invoked using SRDF Host Component software. Suspending the consistency group creates an on-demand, restartable copy of the data base at the secondary site. BCV devices synchronized with the secondary volumes are then split after the consistency group is tripped, creating a second dependent-write consistent copy of the data. During the explicit trip, SRDF Host Component issues the command to create the dependent-write consistent copy, but may require assistance from either IOSLEVEL or RDF-ECA via ConGroup software if I/O is received on one or more of the primary volumes, or if the Symmetrix commands issued are abnormally terminated before the explicit trip.

An EMC consistency group maintains consistency within applications spread across multiple Symmetrix storage systems in an SRDF configuration by monitoring data propagation from the primary volumes in a consistency group to their corresponding secondary volumes. Consistency groups provide data integrity protection during a rolling disaster. The loss of an SRDF communication link is an example of an event that could cause a rolling disaster.


Figure 13 depicts a dependent-write I/O sequence where a predecessor log write happens before a page flush from a data base buffer pool. The log device and data device are on different Symmetrix storage systems with different replication paths. Figure 13 also demonstrates how rolling disasters are prevented using EMC consistency group technology.

Figure 13 SRDF consistency group using RDF-ECA (X = DBMS data, Y = application data, Z = logs)

1. A consistency group protection is defined containing volumes X, Y, and Z on the primary Symmetrix storage system. This consistency group definition must contain all the volumes that need to maintain dependent-write consistency and reside on all participating hosts involved in issuing I/O to these volumes. A mix of CKD (mainframe) and FBA (UNIX/Windows) devices can be logically grouped together. In some cases, the entire processing environment may be defined in a consistency group to ensure dependent-write consistency.

2. The previously described rolling disaster begins.

3. The predecessor log write occurs to volume Z, but cannot be replicated to the remote site.


4. Because the predecessor log write to volume Z cannot be propagated to the remote Symmetrix storage system, a consistency group trip occurs.

5. Enginuity on the primary Symmetrix storage system captures the write I/O that initiated the trip event and defers all write I/Os to all logical volumes within the consistency group on this Symmetrix storage system. The control host software constantly polls all involved Symmetrix storage systems for such a condition.

6. After detecting a trip event, the host software sends an instruction to all involved Symmetrix storage systems in the consistency group definition to defer all write I/Os for all logical volumes in the group. This trip is not an atomic event; the process nevertheless guarantees dependent-write consistency because of the dependent-write I/O principle: until a write I/O (the predecessor log write) is acknowledged as complete to the host, the database prevents the dependent I/O from being issued.

7. Once all of the involved Symmetrix storage systems have deferred the write I/Os for all involved logical volumes of the consistency group, the host software issues a suspend action on the primary/secondary relationships for the logically grouped volumes, which immediately disables all replication of those grouped volumes to the remote site. Other volumes outside the group continue replicating, provided the communication links are available.

8. After the relationships are suspended, completion of the predecessor write is acknowledged to the issuing host, and all I/Os held during the consistency group trip operation are released. The DBMS then issues the dependent data write, which arrives at X but is not replicated to its secondary volume.

When a complete failure occurs from this rolling disaster, the dependent-write consistency at the secondary site is preserved. If a complete disaster does not occur and the failed links are reactivated, consistency group replication can be resumed. EMC recommends creating a copy of the dependent-write consistent image while the resume takes place. After the SRDF process reaches synchronization, the dependent-write consistent copy is achieved at the remote site.
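The hold-and-suspend protocol in the numbered steps above can be sketched as a toy simulation. This is an illustrative model only (the class and all names are invented, not any EMC API); it shows why deferring writes group-wide before suspending replication keeps the secondary image dependent-write consistent.

```python
class ConsistencyGroup:
    """Toy model of a consistency group trip (illustrative only)."""

    def __init__(self, volumes):
        self.volumes = set(volumes)
        self.replicating = True    # R1 -> R2 links still moving data
        self.writes_held = False   # group-wide deferral of new writes
        self.remote_image = []     # writes that reached the secondary

    def write(self, volume, data, predecessor_acked=True):
        # Dependent-write principle: a dependent write is only issued
        # after its predecessor has been acknowledged to the host.
        if not predecessor_acked:
            raise RuntimeError("dependent write issued before predecessor ack")
        if self.writes_held:
            return False           # held: no ack, so no dependent I/O follows
        if not self.replicating:
            self.trip()            # capture the I/O, defer the group, suspend
            return False           # predecessor is NOT acknowledged yet
        self.remote_image.append((volume, data))
        return True                # acknowledged to the host

    def trip(self):
        self.writes_held = True    # defer writes on every grouped volume
        self.replicating = False   # then suspend the R1/R2 relationships


cg = ConsistencyGroup({"X", "Y", "Z"})
ack1 = cg.write("Z", "log record 1")    # replicated and acknowledged
cg.replicating = False                  # rolling disaster: links to R2 lost
ack2 = cg.write("Z", "log record 2")    # trips the group; no ack returned

dependent_blocked = False
try:
    cg.write("X", "data page", predecessor_acked=ack2)
except RuntimeError:
    dependent_blocked = True            # DBMS never issues the dependent I/O

assert dependent_blocked and cg.writes_held
assert cg.remote_image == [("Z", "log record 1")]
```

In the actual sequence, the held predecessor write is acknowledged after the suspend completes (step 8); the model stops at the point where the secondary image is frozen and consistent.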


Restart in the event of a disaster or nondisaster

Two major circumstances require restartability or continuity of business processes as facilitated by SRDF: a true, unexpected disaster and an abnormal termination of processes on which dataflow depends. Both circumstances require that a customer immediately deploy the proper resources and procedures to correct the situation. An actual disaster generally demands more of those resources for a successful recovery/restart.

Disaster

In the event of a disaster in which the primary Symmetrix storage system is lost, database and application services must be run from the DR site. This requires a host at the DR site. The first action is to write-enable the secondary devices. If the device group is not yet built on the remote host, it must be created using the secondary devices that were remote mirrors of the primary devices on the primary Symmetrix.

At this point, the host can issue the necessary commands to access the disks.

After the data is available to the remote host, the database is restarted. The database performs an implicit recovery when activated, or on the first connection if auto restart is enabled. Transactions that were committed but not completed are rolled forward and completed using the information in the active logs. Transactions that had updates applied to the database but were not committed are rolled back. The result is a transactionally consistent database.
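The implicit recovery described above can be illustrated with a minimal log-replay sketch. This is a generic write-ahead-log model, not any particular DBMS: updates from committed transactions are rolled forward from the log, while updates from uncommitted transactions are simply not applied (rollback).

```python
# Toy model of DBMS restart recovery (all names invented).
# Each log entry is (operation, transaction_id, change).
def restart_recovery(log):
    committed = {txn for op, txn, _ in log if op == "commit"}
    db = {}
    for op, txn, change in log:
        if op == "update" and txn in committed:
            key, value = change
            db[key] = value   # roll forward committed work
        # updates from uncommitted transactions are not applied
    return db

log = [
    ("update", "t1", ("acct_a", 50)),
    ("commit", "t1", None),
    ("update", "t2", ("acct_b", 99)),   # t2 never committed: rolled back
]
assert restart_recovery(log) == {"acct_a": 50}
```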

Abnormal termination (not a disaster)

An SRDF session can be interrupted by any situation that prevents the flow of data from the primary site to the secondary site (for example, a software failure, network failure, or hardware failure).

SRDF/A Automated Recovery

SRDF/A Automated Recovery eliminates the need for external automation or manual intervention by automatically restoring SRDF/A to operational status following a planned or unplanned outage. You can configure the software to prompt you for authorization before proceeding with automated recovery.


SRDF/A Automated Recovery supports Multi-Session Consistency (MSC) environments. MSC provides consistency across multiple Symmetrix systems for SRDF/A groups. MSC is enabled by a Licensed Feature Code. The minimum software prerequisites for the automated recovery functionality are SRDF Host Component for z/OS V5.5 and ResourcePak Base for z/OS V5.7.

The primary MSC (with MSC WEIGHT FACTOR=0) performs the following functions:

◆ Detects that SRDF/A has dropped.

◆ Initiates the recovery automation sequence for each SRDF/A group in the MSC group. There is one independent sequence for each group.

◆ Waits for each of the recovery automation sequences to post completion to the primary MSC.

◆ Performs an MSC start.

The recovery automation sequence performs the following functions:

◆ If configured, preserves a consistent image of data at the remote site using TimeFinder/Mirror, based on policy defined in the SRDF Host Component initialization parameters. The TimeFinder/Mirror clone emulation facility can also be used.

◆ Validates that a user-specified minimum number of RDF directors are online.

◆ Performs MSC cleanup.

◆ Automatically recovers SRDF/A at the RDF group level once the invalid track count has reached a user-specified level (default is 30000).

◆ Optionally, based on policy settings, re-establishes BCVs upon successful MSC restart.

The recovery automation can also be manually initiated. For example, it can be initiated following a #SC SRDFA PEND DROP command or after being deferred when the PROMPT option is specified in the SRDFA AUTO RECOVER initialization parameter.

Note: For additional information regarding this feature, please refer to the MSC SRDF/A Automated Recovery Service Notes.
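A minimal sketch of the orchestration order described above, with invented names throughout (this is not SRDF Host Component syntax): the primary MSC detects the drop, runs one independent recovery sequence per SRDF/A group, waits for all of them, then performs the MSC start. With the PROMPT-style option, recovery is deferred until authorized.

```python
# Illustrative ordering of the MSC automated recovery flow (names invented).
def msc_recovery(groups, prompt=False, authorized=True):
    events = []
    if prompt and not authorized:
        return ["recovery deferred: awaiting operator authorization"]
    events.append("SRDF/A drop detected by primary MSC")
    for g in groups:   # one independent sequence per SRDF/A group
        events.append(f"{g}: preserve remote image (TimeFinder, per policy)")
        events.append(f"{g}: validate minimum RDF directors online")
        events.append(f"{g}: MSC cleanup")
        events.append(f"{g}: resume SRDF/A once invalid tracks within limit")
    events.append("all sequences complete -> MSC start")
    return events

steps = msc_recovery(["grpA", "grpB"])
assert steps[0].startswith("SRDF/A drop")
assert steps[-1] == "all sequences complete -> MSC start"
```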


EMC AutoSwap

EMC AutoSwap provides the ability to move (swap) workloads transparently from volumes in one set of Symmetrix storage systems to volumes in other Symmetrix storage systems without operational interruption. Swaps may be initiated either manually as planned events or automatically as unplanned events (upon failure detection).

◆ Planned swaps facilitate operations such as non-disruptive building maintenance, power reconfiguration, DASD relocation, and channel path connectivity reorganization.

◆ Unplanned swaps protect systems against outages in a number of scenarios. Examples include: power supply failures, building infrastructure faults, air conditioning problems, loss of channel connectivity, entire DASD system failures, operator error, or the consequences of intended or unintended fire suppression system discharge.

AutoSwap, with SRDF and EMC Consistency Group, dramatically increases data availability.

In Figure 14 on page 84, swaps are concurrently performed while application workloads continue in conjunction with EMC Consistency Group. This option protects data against unforeseen events, and ensures that swaps are unique, atomic operations that maintain dependent-write consistency.


Figure 14 AutoSwap before and after states

AutoSwap highlights

AutoSwap includes the following features and benefits:

◆ Validity testing on devices in swap groups to ensure address-switching conditions are met. AutoSwap supports grouping devices into swap groups and treats each swap group as a single swap entity.

◆ Consistent swapping. Writes to the group are held during swap processing, ensuring dependent-write consistency to protect data and ensure restartability.

◆ Swap coordination across multiple z/OS images in a shared DASD or parallel Sysplex environment. During the time when devices in swap groups are frozen and I/O is queued, AutoSwap reconfigures SRDF pairs to allow application I/O streams to be serviced by secondary SRDF devices. As the contents of UCBs are swapped, I/O redirection takes place transparently to the applications. This redirection persists until the next Initial Program Load (IPL) event.
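The consistent swap can be pictured as: hold writes to the whole swap group, repoint each device's UCB from its R1 to its R2, then release the queued I/O. A toy sketch with invented names (real AutoSwap coordinates this across z/OS images and the I/O subsystem):

```python
# Toy model of a consistent swap (illustrative only, names invented).
def consistent_swap(ucb_map, pairs):
    """Hold I/O, repoint every UCB from R1 to R2, then release I/O."""
    held = True                      # freeze writes to the whole swap group
    try:
        for device, (r1, r2) in pairs.items():
            assert ucb_map[device] == r1
            ucb_map[device] = r2     # redirection persists until the next IPL
    finally:
        held = False                 # release the queued I/O
    return ucb_map, held

ucbs = {"0A00": "R1-X", "0A01": "R1-Y"}
pairs = {"0A00": ("R1-X", "R2-X"), "0A01": ("R1-Y", "R2-Y")}
ucbs, held = consistent_swap(ucbs, pairs)
assert ucbs == {"0A00": "R2-X", "0A01": "R2-Y"} and held is False
```

Because every device in the group is repointed while writes are held, applications never see a mixed state in which some I/O goes to primaries and some to secondaries.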

[Figure 14 (ICO-IMG-000107) panels: Before, with application I/O to the R1 devices replicating over SRDF/S with SRDF/CG; After, with I/O redirected to the former secondary devices while SRDF/S and SRDF/CG continue.]

Use cases

AutoSwap can:

◆ Perform dynamic workload reconfiguration without application downtime.

◆ Concurrently swap large numbers of devices.

◆ Handle device group operations.

◆ Relocate logical volumes.

◆ Perform consistent swaps.

◆ Implement planned outages of individual devices or entire systems.

◆ React appropriately to unforeseen disasters if an unplanned event occurs.

◆ Protect against the loss of all DASD channel paths or an entire storage system. This augments the data integrity protection provided by Consistency Groups by providing continuous availability in the event of a failure affecting the connectivity to a primary device.


TimeFinder family products for z/OS

For years, the TimeFinder family of products has provided important capabilities for the mainframe environment. TimeFinder was recently repackaged as shown in Figure 15, but the product's features and functions remain the same.

Figure 15 TimeFinder family of products for z/OS

TimeFinder/Clone for z/OS

TimeFinder/Clone for z/OS is documented as a component of the TimeFinder/Clone Mainframe Snap Facility. It is the code and documentation associated with making full-volume snaps and dataset-level snaps. As such, these are space-equivalent (full) copies. TimeFinder/Clone does not consume a mirror position, nor does it exhibit the BCV flag on the Symmetrix storage system. Certain TimeFinder/Mirror commands, such as Protected BCV Establish, are unavailable in the TimeFinder/Clone Mainframe Snap Facility because clone is a copy technology rather than a mirror technology. Other protection mechanisms, such as RAID 5, are available to the target storage system as well.

[Figure 15 (ICO-IMG-000108) shows the TimeFinder Family for z/OS: TimeFinder/Clone (ultra-functional, high-performance copies), TimeFinder/Snap (economical space-saving copies), TimeFinder/Mirror (classic high-performance option), and TimeFinder/CG (Consistency Group option).]

Additional mainframe-specific capabilities of TimeFinder/Clone for z/OS include:

◆ Dataset-level snap operations.

◆ Differential snap operations. These require only the changed data to be copied on subsequent snaps.

◆ Support for CONSISTENT SNAP operations. These make the target dependent-write consistent and require the TimeFinder/Consistency Group product.

◆ Up to 16 simultaneous point-in-time copies of a single primary volume.

◆ Compatibility with STK Snapshot Copy and IBM snap products, including reuse of the SIBBATCH syntax.

◆ TimeFinder Utility for z/OS. This conditions the catalog by relabeling and recataloging entries to avoid issues associated with duplicate volume names in the mainframe environment. This utility is also delivered with TimeFinder/Mirror, TimeFinder/Clone, and TimeFinder/Snap products.

◆ Compatibility with mainframe security mechanisms such as RACF.

◆ Integration with many mainframe-specific ISVs and their respective products.

TimeFinder/Snap for z/OS

TimeFinder/Snap for z/OS uses the code and documentation from TimeFinder/Clone, but with an important difference. Snaps made with this product are virtual snaps, meaning they take only a portion of the space a full-volume snap would. Invocation of this feature is through the keyword VDEV (Virtual Device). If the VDEV argument is used, only the pre-update image of the changed data plus a pointer is kept on the target. This technique considerably reduces disk space usage on the target. This feature also provides for one or more named SNAPPOOLs that can be managed independently.

Mainframe-specific features of the TimeFinder/Snap for z/OS product include:

◆ The same code and syntax as the TimeFinder/Clone Mainframe Snap Facility (plus the addition of the VDEV argument).


◆ The same features and functions as TimeFinder/Clone Mainframe Snap Facility and therefore the same benefits.

◆ Logical Volume support only (no dataset support).

◆ Catalog conditioning with TimeFinder Utility for z/OS. This allows relabeling and recataloging entries and avoids issues associated with duplicate volume names in the mainframe environment.

◆ Compatibility with mainframe security mechanisms such as RACF.

◆ Integration with many mainframe-specific ISVs and their respective products.

TimeFinder/Mirror for z/OS

TimeFinder/Mirror for z/OS provides BCVs and the means by which a mainframe application can manipulate them. BCVs are specially tagged logical volumes manipulated by using these TimeFinder/Mirror commands: ESTABLISH, SPLIT, RE-ESTABLISH, and RESTORE.

Mainframe-specific features of the TimeFinder/Mirror product include:

◆ The TimeFinder Utility for z/OS, which conditions the catalog by re-labeling and re-cataloging entries, thereby avoiding issues associated with duplicate volume names in the mainframe environment.

◆ The ability to create dependent-write consistent BCVs locally or remotely (with the plug-in module called TimeFinder/Consistency Group) without the need to quiesce production jobs.

◆ BCV operations important to IS departments include:

• Using the BCV as the source for backup operations.

• Using the BCV for test LPARs with real data. The speed with which a BCV can be reconstituted means that multiple test cycles can occur rapidly and sequentially. Applications can be staged using BCVs before committing them to the next application refresh cycle.


• Using the BCV as the source for data warehousing applications rather than the production volumes. Because the BCVs are a point-in-time mirror image of the production data, they can be used as golden copies of data to be written and rewritten repeatedly.

◆ The use of SRDF/Automated Replication.

◆ The support for mainframe TimeFinder-based queries including the use of wildcard matching.

◆ The compatibility with mainframe security mechanisms such as RACF.

◆ The integration of DBMS utilities available from ISVs and their products.

◆ The integration of many mainframe-specific ISVs and their products.

TimeFinder/CG

TimeFinder/CG (Consistency Group) is a plug-in module for the TimeFinder/Mirror, TimeFinder/Clone, and TimeFinder/Snap products. TimeFinder/CG provides consistency support for various TimeFinder family commands. TimeFinder/CG is licensed separately and uses a Licensed Feature Code implementation model.

Some benefits and use cases include:

◆ BCVs for parallel processing while production work continues on standard volumes.

◆ Compatibility with STK and IBM snapshot products.

◆ RAID 1, RAID 5, and RAID 10 protection.

◆ BCV clone emulation for RAID 5-protected devices means preservation of JCL procedures. BCV clone emulation on mainframes has certain microcode prerequisites that should be verified before implementation of this feature.


3

Understanding SRDF/A and SRDF/A MSC Consistency

This chapter presents these topics:

◆ Overview
◆ SRDF/A history
◆ Tolerance mode
◆ SRDF/A single session mode point in time
◆ SRDF/A single session mode states
◆ SRDF/A single session mode delta set switching
◆ SRDF/A single session mode state transitions
◆ SRDF/A single session cleanup process
◆ SRDF/A single session mode recovery scenarios
◆ SRDF/A Reserve Capacity enhancement: Transmit Idle
◆ SRDF/A Reserve Capacity enhancement: Delta Set Extension
◆ SRDF/A Multi-Session Consistency (MSC) mode
◆ SRDF/A MSC mode dependent-write consistency
◆ SRDF/A MSC mode delta set switching
◆ SRDF/A MSC session cleanup process
◆ Using TimeFinder to create a restartable copy
◆ Establishing and using SRDF/A with Cascaded SRDF
◆ SRDF/A with SRDF/Extended Distance Protection
◆ Mainframe Enabler 7.0 (SRDF Host Component 7.0) changes
◆ Using SRDF/A write pacing


Overview

This chapter begins with a high-level overview of SRDF/Asynchronous, followed by a short history of SRDF/A through various Enginuity releases. A detailed explanation then follows of how dependent-write consistency is maintained during SRDF/A single session mode and SRDF/A Multi-Session Consistency (MSC) mode cycle switches. This chapter also discusses how to create a DBMS restartable copy on BCVs, BCVs in clone emulation mode, or native clone technology for primary and secondary use.

Note: Primary refers to the source volumes and secondary refers to the target volumes.

SRDF/A (Asynchronous) operations

SRDF/Asynchronous (SRDF/A) operations require an SRDF/A license and are supported with EMC ResourcePak Base and EMC SRDF Host Component software. Symmetrix systems running Enginuity 5670 and higher support SRDF/A mode for SRDF groups. This mode provides a dependent-write consistent, point-in-time image on the secondary devices that only slightly lags the primary devices. During the SRDF/A session, data is transferred to the remote Symmetrix systems in cycles, or delta sets. Although most customers adopt the default cycle time in Enginuity, the cycle time can be modified to adjust this lag and conform to specific RPO and RTO objectives; consult the appropriate SRDF/A product guide for details.

Note: The default configuration for SRDF/A keeps the secondary volumes at most two cycles behind the data state of the primary volumes, which equates to 60 seconds. The actual lag varies with the host system write workload, the available network bandwidth, and the specifics (cache size, number and type of devices, and number of connections) of the Symmetrix configuration at each site.

SRDF/A provides a long-distance replication solution with minimal impact on performance. It is engineered to preserve data consistency within applications and to meet specified business goals, among which is the need for a point-in-time copy of data at the secondary site. Dependent-write consistency is guaranteed on a delta set boundary. In the event of a disaster at the primary site, or if all SRDF links are lost during data transfer, the partial delta set of data is discarded; this preserves consistency on the secondary volumes with a maximum data loss of two SRDF/A cycles.
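The delta set mechanism and the two-cycle loss bound can be sketched as a toy model. This is illustrative only (real SRDF/A maintains capture, transmit, receive, and apply delta sets in cache on both systems); it shows why discarding the in-flight data on a total link loss leaves the secondary on a consistent boundary.

```python
# Toy SRDF/A delta-set model (illustrative; not Enginuity internals).
class SrdfaSession:
    def __init__(self):
        self.capture = []     # N: collects new host writes on the primary
        self.in_flight = []   # N-1: transmit/receive pair across the link
        self.secondary = []   # N-2: last applied, consistent image

    def write(self, data):
        self.capture.append(data)

    def cycle_switch(self):
        # Switches only once the previous in-flight set is fully received,
        # so the secondary always moves between consistent boundaries.
        self.secondary += self.in_flight
        self.in_flight, self.capture = self.capture, []

    def all_links_lost(self):
        # The partially transferred set is discarded at the secondary;
        # at most two cycles (in_flight + capture) of writes are lost.
        lost = self.in_flight + self.capture
        self.in_flight, self.capture = [], []
        return lost

s = SrdfaSession()
s.write("w1"); s.cycle_switch()
s.write("w2"); s.cycle_switch()   # w1 is now applied at the secondary
s.write("w3")                     # w3 still collecting when disaster hits
lost = s.all_links_lost()
assert s.secondary == ["w1"] and lost == ["w2", "w3"]
```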

SRDF/A benefits

SRDF/A provides the following features and benefits:

◆ Supports extended data replication while preserving database and application consistency.

◆ Promotes efficient link utilization, possibly resulting in lower link bandwidth requirements.

◆ Maintains a dependent-write consistent point-in-time image on the secondary devices.

◆ Supports all current SRDF topologies (ESCON, FarPoint™, point-to-point and switched fabric Fibre Channel, and Gig-E).

◆ Requires no additional hardware, such as switches or routers.

◆ Supports hosts listed in the EMC Support Matrix for CKD and FBA data emulation types.

Note: The Support Matrix at http://EMC.com or http://Powerlink.EMC.com, or your EMC Sales Team can provide more information on currently supported hosts.

◆ Provides a write performance response time equivalent to writing to local (non-SRDF) devices.

◆ Allows failover and failback capabilities between the primary and secondary sites.

SRDF/A support of Dynamic SRDF

The ability to configure Dynamic SRDF-capable devices has been available since Enginuity 5567. At that time, the only functionality permitted was a personality swap between the primary and secondary volumes, and it was restricted to non-FarPoint configurations.

Dynamic SRDF functionality was significantly enhanced in Enginuity 5568 to allow the creation, deletion, and swapping of SRDF pairs using EMC host-based SRDF software, while the Symmetrix system


was still in operation. With Dynamic SRDF, you can create SRDF device pairs from non-SRDF devices, and then synchronize and manage them in the same way as configured SRDF pairs.

Note: At Enginuity 5x71, SRDF/A devices also can be configured as Dynamic SRDF capable devices. Prior to Enginuity 5x71 this function was not available for SRDF/A.

Dynamic SRDF is supported over the following topologies:

◆ ESCON point-to-point connection (RA)

◆ Fibre Channel point-to-point connection (RF)

◆ Switched Fibre Channel fabric connection (RF)

◆ Gig-E connection (RE)

SRDF/A session status

SRDF/A session status is displayed as one of the following:

◆ Inactive — All of the devices in the SRDF group are either ready or not ready on the link.

◆ Active — SRDF/A mode is activated, and the SRDF/A delta sets are currently being transmitted using operational cycles to the secondary subsystem.

◆ Not Ready — Not Ready (NR) state at system startup. When the SRDF environment is configured and the SRDF links come up, all SRDF volumes are in a not ready (NR) state by default. This means that all the remote devices on the primary Symmetrix are not ready on the SRDF link. These devices can be made ready on the link by issuing commands from the host software, which transitions the session to the inactive SRDF/A state.
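These states form a small state machine. A sketch with invented action names (not host software commands):

```python
# Toy state machine for the SRDF/A session states above (names invented).
TRANSITIONS = {
    ("not_ready", "make_ready_on_link"): "inactive",
    ("inactive", "activate_srdfa"): "active",
    ("active", "deactivate"): "inactive",
}

def next_state(state, action):
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"invalid transition: {action} from {state}")

state = "not_ready"                         # default after links come up
state = next_state(state, "make_ready_on_link")   # devices ready on link
state = next_state(state, "activate_srdfa")       # delta sets now cycling
assert state == "active"
```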


SRDF/A history

This section describes the highlights of SRDF/A development in conjunction with its availability.

Enginuity 5670/SRDF/A single session

Enginuity 5670 supported single session mode SRDF/A. This configuration allowed a single SRDF group within a single primary Symmetrix system to participate in asynchronous mode to a single SRDF group contained within a single secondary Symmetrix system. This SRDF/A group could not use Dynamic SRDF.

Enginuity 5670.50 and later/SRDF/A (MSC)

Enginuity 5670.50 introduced the use of SRDF/A MSC for mainframe systems. The initial support was limited to a single SRDF/A group per Symmetrix system; however, multiple Symmetrix systems could participate in the SRDF/A MSC configuration.

This feature provided a special mode of SRDF/A operation in which cycle switching could be controlled by the host application using a Symmetrix system call interface, providing dependent-write consistency across several Symmetrix systems.

Enginuity 5X71

SRDF/A dual directional support

Enginuity 5671 allowed multiple SRDF/A groups (up to 64, depending on the configuration) per Symmetrix system to operate as either primary or secondary SRDF/A groups. This provided enhanced flexibility to configure and use SRDF/A with dual directional capability, as shown in Figure 16 on page 96.

Note: Bidirectional operation within a single SRDF/A group is not supported. All primary-to-secondary data flow operations within a SRDF/A group must remain unidirectional.


Figure 16 SRDF/A delta sets and their relationships

The cycle transitions within the same Symmetrix system, such as Capture to Transmit on the primary system and Receive to Apply on the secondary system, are cache-memory-only operations that modify bit settings without the overhead of actually copying data between the cycles. The only data copying during this process takes place between the Transmit and Receive (N-1) cycles, as data is physically transmitted over the SRDF link between the primary and secondary sites.

SRDF/A Multi-Session Consistency (MSC)

Enginuity 5670.50 and later initiated the use of SRDF/A Multi-Session Consistency (MSC) for mainframe. This initial support was limited to a single SRDF/A session per Symmetrix, but multiple Symmetrix systems could participate in a SRDF/A MSC configuration. This feature provides a special mode of SRDF/A operation where cycle switching is controlled by the host application using a Symmetrix system call interface, and could be used to provide dependent-write point-in-time consistency across several Symmetrix systems.

[Figure 16 (ICO-IMG-000219) shows a dual-directional configuration as an extended distance solution: Site A and Site B hosts, with each Symmetrix system acting as source for one session and target for the other over the SRDF links (RA-1 and RA-2); two unidirectional SRDF RA sessions, with a primary and a secondary in each Symmetrix system.]

Concurrent SRDF support

Multiple SRDF/A sessions (up to 64) are allowed per Symmetrix array. Implicit in this feature is the ability to define a primary for one SRDF/A group and a secondary for another SRDF/A group in the same Symmetrix DMX. This provides the ability to use SRDF/A with dual directional operations.

Note: Bidirectional operation within a single SRDF/A group is not supported. All primary to secondary data flow operations within a SRDF/A group remain unidirectional.

SRDF/Star

SRDF/Star involves three sites (A, B, and C). The primary site (A) and one secondary site (B) contain equivalent data, since they are connected in synchronous mode with SRDF/S. Sites A and C, connected in asynchronous mode using SRDF/A, experience a data lag that causes site C to be at most two SRDF/A cycles behind site A. The data at site C is guaranteed consistent based on the SRDF/A properties discussed previously. Furthermore, site C may be located considerably farther from either A or B, presenting the customer with an enhanced set of options for recovery. Loss of A can result in recovery at either B, which contains current data up to the point of disaster, or C, which can be lagging as mentioned earlier. However, recovery at C can be made "data current" using an incremental update from B to C. SRDF/Star provides recovery flexibility at any one of the secondary sites in the event that the primary site is lost.

Dynamic SRDF support

Active SRDF/A supports Dynamic SRDF, with the restriction that Dynamic SRDF be used only to stage changes: the addition or removal of Dynamic SRDF/A groups, and the addition or removal of devices from SRDF/A groups, take effect when SRDF/A is reactivated.

SRDF/A must be inactive, and the mode changed to adaptive copy prior to adding or removing a device from an SRDF/A group. The added device pairs require full synchronization. Adaptive copy mode is recommended to facilitate bulk data transfer.

To remove a device from an SRDF/A session, the device must be made not ready and all outstanding I/Os in that session for that device must be drained or discarded.


To remove a Dynamic SRDF/A group:

◆ The group must contain zero devices
◆ The SRDF/A session must be inactive
◆ Consistency must be disabled
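The removal preconditions above can be expressed as simple checks. The field names below are invented for illustration; they are not host software or API fields.

```python
# Hedged sketch of the Dynamic SRDF/A removal preconditions (names invented).
def can_remove_device(device):
    # Device must be not ready with all outstanding I/Os drained/discarded.
    return device["ready"] is False and device["outstanding_ios"] == 0

def can_remove_group(group):
    return (len(group["devices"]) == 0
            and not group["session_active"]
            and not group["consistency_enabled"])

dev = {"ready": False, "outstanding_ios": 0}
grp = {"devices": [], "session_active": False, "consistency_enabled": False}
assert can_remove_device(dev)
assert can_remove_group(grp)
assert not can_remove_group({"devices": [dev], "session_active": False,
                             "consistency_enabled": False})
```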

Configurable cache utilization and drop priority

This host-software-configurable feature allows individual SRDF/A processes, single session or MSC, to set the cache usage limit to a percentage of the system write pending limit. Current SRDF/A cycles can grow until they reach the Symmetrix system write pending limit. If this level is exceeded, the SRDF/A session drops according to drop priorities (if drop priority is set). Without this feature, if system write pending limits are exceeded, performance across the entire Symmetrix system, not just the SRDF/A applications, may be affected.

This feature also provides the ability to define which SRDF/A groups to drop first if cache resources are stressed, effectively assigning a priority to sessions. This keeps SRDF/A active as long as possible.

Drop priority is a mechanism for controlling cache usage. If the SRDF/A cache percentage limit is exceeded, the drop priority determines the order, from high to low priority, in which SRDF/A sessions in the Symmetrix system are dropped to alleviate the condition. The highest priority is 1; the lowest is 64.
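A sketch of drop-priority selection under cache pressure, with an invented session structure (this is not how Enginuity represents sessions): sessions are dropped starting at priority 1 until usage falls back under the limit.

```python
# Illustrative drop-priority selection (names and structure invented).
def sessions_to_drop(sessions, cache_used, cache_limit):
    if cache_used <= cache_limit:
        return []
    # Priority 1 is dropped first, priority 64 last.
    freed, victims = 0, []
    for s in sorted(sessions, key=lambda s: s["drop_priority"]):
        if cache_used - freed <= cache_limit:
            break
        victims.append(s["name"])
        freed += s["cache_slots"]
    return victims

sessions = [
    {"name": "prod", "drop_priority": 64, "cache_slots": 40},
    {"name": "test", "drop_priority": 1,  "cache_slots": 30},
    {"name": "dev",  "drop_priority": 10, "cache_slots": 30},
]
# Dropping the priority-1 session alone relieves the pressure here.
assert sessions_to_drop(sessions, cache_used=90, cache_limit=70) == ["test"]
```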

SRDF/A Transmit Idle

SRDF/A Transmit Idle enables asynchronous replication to remain active in the event all links are temporarily lost due to network outages. How long SRDF/A can remain active depends on how long it takes for the system to exceed either the system write pending limit or the customizable SRDF/A maximum cache limit.

Enginuity 5772

SRDF/A Reserve Capacity

SRDF/A Reserve Capacity enhances SRDF/A's ability to maintain an active state when encountering network resource shortfalls that would previously have dropped SRDF/A. With SRDF/A Reserve Capacity functions enabled, additional resources can be allocated to address temporary workload peaks, periods of network congestion, or even temporary network outages. The two functions upon which SRDF/A Reserve Capacity is implemented are Transmit Idle and Delta Set Extension (DSE); together they maximize SRDF/A's replication availability while minimizing operational complexity.

SRDF/A Delta Set Extension (DSE) enables SRDF/A to remain active when system cache resources approach the system write pending limit or the SRDF/A maximum cache limit. This is accomplished by offloading some or all of the active delta set data awaiting transmission to the secondary site into preconfigured storage pools on the primary system.

SRDF/A Transmit Idle and Delta Set Extension have the ability to work together to improve the overall resiliency of SRDF/A during workload and network resource imbalances.

Note: SRDF/A Reserve Capacity is not intended to solve fundamentally unbalanced configurations.

Enginuity 5772.79.71

128 RDF sessions per Symmetrix array

The number of SRDF sessions that can be created per Symmetrix array has been increased from 64 to 128. To accommodate this increase, the number of SRDF sessions that can be configured on an RAF (Remote Fibre) port has increased from 16 to 32.

Enginuity 5773

Enginuity 5773 supports the Symmetrix Direct Matrix Architecture DMX-3 and DMX-4 storage arrays, and contains features that provide increased storage utilization and optimization, enhanced replication capabilities, and greater interoperability and security, as well as multiple ease-of-use improvements.

Cascaded SRDF

One such feature providing new replication capabilities is Cascaded SRDF, which supports a three-site disaster recovery configuration. The core benefit behind a "cascaded" configuration is its inherent capability to continue replicating with minimal user intervention from the secondary site to a tertiary site with SRDF/A in the event that the primary site goes down. This enables a faster recovery at the tertiary site, provided that is where the customer is looking to restart their operation.

Prior to Enginuity 5773, an SRDF device could be a primary device (R1) or a secondary device (R2); however, it could not serve in both roles simultaneously. Cascaded SRDF is a new three-site disaster recovery configuration in which data from a primary site is synchronously replicated to a secondary site, and then asynchronously replicated to a tertiary site.

Cascaded SRDF introduces a new SRDF R21 device. The R21 device will assume dual roles of primary (R1) and secondary (R2) device types simultaneously. Data received by this device as a secondary can automatically be transferred by this device as a primary (according to the possible modes).

Moving dynamic RDF device pairs

Previously, moving a dynamic SRDF device pair from one SRDF group to another required the deletion of the existing pair followed by the creation of the same pair in the new SRDF group. While successful in moving the device pair, this process resulted in a full resynchronization of the data.

Enginuity 5773 allows this move to occur without the deletion of the existing dynamic SRDF pair, thus avoiding a full resynchronization.

SRDF/A R2 Timestamp Representation

When using SRDF/A, it is important to constantly monitor adherence to the Recovery Point Objective (RPO). Previously, when querying an SRDF/A-enabled SRDF group, the "time that R2 is behind R1" for the R2 data was displayed only while the SRDF/A session was active, and it was available only when querying from a host connected to the R1 array.

With Enginuity 5773, the Enginuity microcode running on the Symmetrix DMX will maintain the time that the R2 is behind the R1. The information will be stored on both the R1 and R2 Symmetrix DMX and will be available when querying the SRDF/A devices from a host connection to the R2 Symmetrix DMX.

EMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS

Configure static Concurrent SRDF

Previously, Concurrent SRDF pairings could only be created or removed if the devices being managed were configured to be dynamic SRDF capable. Solutions Enabler 6.5 introduces new syntax for the symconfigure command to allow Concurrent SRDF pairs to be managed for static RDF.

IPv6 support

Internet Protocol version 6 (IPv6) is a network layer protocol for packet-switched internetworks. Improving on IPv4, IPv6 provides a much larger address space that allows greater flexibility in assigning addresses; IPv6 is able to support 2^128 addresses. This extended address length eliminates the need to use network address translation to avoid address exhaustion, and also simplifies aspects of address assignment and renumbering when changing providers.

Enginuity 5773 introduces support for IPv6 on Symmetrix DMX-3 and DMX-4 storage arrays for SRDF on Gigabit Ethernet directors.

IPSec support

IPSec is an Internet Protocol standard (RFC 2401), a framework of open standards that allows a user to create IP network tunnels through an existing IP network. All traffic contained within such a tunnel is afforded a configurable measure of protection against hostile exterior entities.

Enginuity 5773 offers IPSec support for SRDF on the Symmetrix DMX-3 and DMX-4 gigabit Ethernet directors.

Enginuity 5874

EMC Enginuity version 5874 was the initial release to support the EMC Symmetrix VMAX. Symmetrix VMAX is an innovative platform built around a scalable design. This new product in the Symmetrix product line is the embodiment of the simple, intelligent, modular storage strategy.

250 SRDF groups

In Enginuity versions 5772 and 5773, SRDF groups could be assigned group numbers between 1 and 250, with a maximum of 128 groups created per Symmetrix and a maximum of 32 SRDF groups on any one SRDF director. Enginuity 5874 now allows up to a maximum of 250 SRDF groups to be created, still numbered between 1 and 250, with a maximum of 64 SRDF groups on any one SRDF director.

SRDF/Extended Distance Protection (SRDF/EDP)

Previously, EMC introduced a new replication capability for cascaded SRDF (Symmetrix Remote Data Facility) that supported a three-site disaster recovery configuration. The core benefit behind a "cascaded" configuration is its inherent capability to continue replication with minimal user intervention from the secondary site to a tertiary site with SRDF/A in the event that the primary site goes down. This enabled a faster recovery at the tertiary site, provided that is where the customer is looking to restart the operation.

Available with Enginuity 5874, SRDF/Extended Distance Protection (SRDF/EDP) is a new two-site disaster restart solution that enables customers to achieve no data loss at an out-of-region site at a lower cost. Using cascaded SRDF as the building block for this solution, combined with the use of the new diskless R21 data device at an intermediate (pass-through) site Symmetrix system, provides data pass-through to the out-of-region site using SRDF/A.

SRDF/A adding or removing devices

Prior to Enginuity 5874, adding volumes to, or removing volumes from, an existing SRDF/A-enabled SRDF group required that all volumes in the group be suspended, causing the SRDF/A session to be deactivated. After a volume was added, all volumes in the SRDF group remained in a "syncinprog" state until all invalid tracks for all volumes, including the newly added ones, were cleared and two more cycle switches occurred. During this process there was no way to determine which volumes in the group were or were not in a consistent state.

Newly introduced in Enginuity 5874 is the SRDF/A consistency exempt feature with SRDF/A adding or removing devices. This consistency exempt feature provides the ability to dynamically add and remove volumes from an active SRDF/A session without affecting the state of the SRDF/A session or the reporting of the SRDF pair state for each of the volumes in the active session that are not the target of the add or remove operation. This is achieved by marking the volumes being added or removed as “exempt” from being considered when calculating the consistency state of the volumes in the SRDF/A session or when deciding if the SRDF/A session should be dropped to maintain dependent write consistency on the R2 side.

Setting the consistency exempt flag on a volume allows the volume to be added or removed from an active SRDF/A SRDF group using either a create, a delete or a move operation without requiring the other volumes in the SRDF group to be suspended prior to the operation.
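A minimal sketch of how a consistency calculation can skip exempt volumes, so that volumes being added or removed do not affect the reported session state. This is illustrative Python, not Enginuity logic; the data layout and names are assumptions:

```python
# Illustrative sketch: session consistency is computed only over volumes
# NOT flagged consistency-exempt, so a device being added (and still
# synchronizing) does not change the reported state of the session.

def session_consistent(volumes):
    """volumes: list of dicts with 'consistent' and 'exempt' flags."""
    counted = [v for v in volumes if not v["exempt"]]
    return all(v["consistent"] for v in counted)

volumes = [
    {"name": "VOL1", "consistent": True,  "exempt": False},
    {"name": "VOL2", "consistent": True,  "exempt": False},
    {"name": "NEW1", "consistent": False, "exempt": True},   # being added
]
print(session_consistent(volumes))  # True: NEW1 is not counted
```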

SRDF/Star with an R22 device

A new SRDF volume type is introduced in Enginuity 5874: the concurrent R2 (R22). A concurrent R2 volume is one whose two remote mirrors are each paired with a different R1 volume; however, only one of the R2 mirrors may be enabled on an SRDF link and receiving data from its corresponding R1 volume at any given time. The primary intended use for an R22 volume is to simplify failover situations and improve resiliency in SRDF/Star environments. With the introduction of the R22 volume, Star setup can include the creation of recovery volume pairings at the recovery site, negating the need to create these pairings during an SRDF/Star failover or switch event. The availability of an R22 volume also simplifies swap operations in cascaded SRDF configurations.

SRDF Timestamp for Suspend/Resume

Enginuity 5874 introduces a new reporting field for SRDF volume pairs. When using Solutions Enabler 7.0 to query an SRDF pair or SRDF volume group, a timestamp is reported indicating when the status of the volume pair last changed. The current link status indicates whether this last status change caused data transfer to be suspended or resumed on the link. The time of last action is reported regardless of whether the query was issued from the R1 side or the R2 side.

Remote TimeFinder/Clone to SRDF R1 Restore

In Enginuity 5773 and earlier, an SRDF restore operation could not be initiated if the R2 volume was the target of an in-progress TimeFinder/Clone restore; the clone restore had to complete before the SRDF restore could be performed. Starting with Enginuity 5874, the R2 volume can be used to restore its partnered R1 volume while a clone restore to the R2 is in progress.

Enginuity 5874 Q4'09 Service Release

Enginuity version 5874 Q4'09 SR is the latest Enginuity release supporting the Symmetrix VMAX storage arrays. Enginuity version 5874 features new software functionality such as Fully Automated Storage Tiering (FAST) and enhancements to current products including Virtual Provisioning and SRDF.

SRDF 8-Gigabit FC and FICON Connectivity

Symmetrix VMAX has been enhanced to support new 8-Gigabit front-end connectivity for both Fibre Channel and FICON connected hosts. These connections also increase the bandwidth available for SRDF connectivity.

Enginuity-based compression for SRDF

SRDF data may now be compressed using software compression under the control of SRDF dynamic parameters. Enable and disable controls are available per SRDF group under a policy definition. The decision to use compression is made per individual I/O based on the policy setting for that specific SRDF group.

Enginuity 5874.180

Enginuity 5874.180 lowers the SRDF/A maximum cache usage percentage from 94% to 74%. This change was implemented in Enginuity 5874.180 and is now the default for all subsequent VMAX Enginuity releases. For paired DMX systems, it is highly recommended that users explicitly lower the previously configured, or defaulted, maximum cache usage percentage to 74% to match the associated VMAX system behavior.

Enginuity 5875

EMC Symmetrix VMAX series with Enginuity incorporates a scalable fabric interconnect design that allows the storage array to seamlessly grow from an entry-level configuration to a 2 PB system. Symmetrix VMAX provides predictable, self-optimizing performance and enables organizations to scale out on demand in Private Cloud environments. It automates storage operations to exceed business requirements in virtualized environments, with management tools that integrate with virtualized servers and reduce administration time in private cloud infrastructures. Customers are able to achieve "always on" availability with maximum security, fully non-disruptive operations, and multi-site migration, recovery, and restart to prevent application downtime.

Enginuity 5875 for Symmetrix VMAX extends customer benefits in the following areas:

◆ More Efficiency — Zero-downtime tech refreshes with Federated Live Migration, and lower costs with automated tiering

◆ More Scalability — Up to 2x increased system bandwidth, with the ability to manage up to 10x more capacity per storage admin

◆ More Security — Built-in Data at Rest Encryption

◆ Improved application compatibility — Increased value for virtual server and mainframe environments, including improved performance and faster provisioning for z/OS servers

SRDF/A Write Pacing

SRDF/A write pacing provides an additional method of extending the availability of SRDF/A by preventing conditions that result in Symmetrix cache exhaustion. Distinct from the existing mechanisms, SRDF/A write pacing is dynamic: it detects when SRDF/A I/O service rates are lower than host I/O rates and takes corrective action to slow host I/O rates to match the slower service rate. This includes detecting spikes in host write I/O rates and slowdowns in both transmit and R2-side restore rates. In this way, monitoring and throttling of host write I/O rates can control the amount of cache used by SRDF/A, which prevents the cache from becoming exhausted on both the primary (R1) and secondary (R2) sides.

The write pacing feature offers a group pacing option enabled for the entire SRDF/A group and a device pacing option enabled for an individual SRDF/A R1 volume whose R2 partner on the secondary system participates in TimeFinder operations. Both write pacing options are compatible with each other and with other SRDF/A features such as tunable cache utilization and Reserve Capacity. EMC host-based SRDF software allows users to enable/disable each write pacing option.

Concurrent SRDF/A

Concurrent SRDF/A allows both R1 SRDF mirrors of the same R11 volume to operate in asynchronous mode, even though each R1 SRDF mirror belongs to a different SRDF/A group.

Tolerance mode

Tolerance mode permits SRDF/A to operate when certain conditions occur that would normally drop SRDF/A.

When tolerance mode is set, point-in-time dependent-write consistency is not guaranteed.

SRDF/A MSC allows tolerance mode to be turned on, but if it is turned on for any SRDF/A group within the MSC configuration, the results may be inconsistent for the entire MSC group.

The host software that implements tolerance mode differs between mainframe and open systems: the mainframe software exposes the use of tolerance mode directly, while open systems software externalizes it through enabling and disabling consistency mode.

Tolerance mode is intended to be used primarily for service actions such as drive replacements.

Note: The EMC Symmetrix SRDF Host Component for z/OS Product Guide provides more information.

SRDF/A single session mode refers to the implementation of SRDF/A between one primary Symmetrix system using one SRDF group and one secondary Symmetrix system using one SRDF group. However, beginning with Enginuity 5x71, multiple SRDF groups running SRDF/A in single session mode are allowed within each Symmetrix system.

To implement asynchronous mode, Symmetrix systems transfer host writes from the primary Symmetrix system to the secondary Symmetrix system as point-in-time dependent-write consistent delta sets, moved in cycles. This differs from traditional ordered-write asynchronous approaches in three ways:

◆ First, each delta set contains groups of writes for processing, which are managed for dependent-write consistency by the Enginuity operating environment.

◆ Second, SRDF/A transfers the sets of data using cycles of operation, one cycle at a time, between the primary Symmetrix system and the secondary Symmetrix system.

◆ Third, if the same block of data is overwritten more than once (the principle of locality of reference) within an active cycle on the primary Symmetrix system, SRDF/A sends only the last update of the block over the SRDF link; this is referred to as the principle of write folding. Write folding lowers the required link bandwidth, often substantially as compared to other ordered write processing approaches that transfer each write separately.
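Write folding can be illustrated with a small sketch: within one capture cycle, later writes to a block simply overwrite earlier ones, so only the final image per block crosses the link. This is a conceptual model, not Enginuity code:

```python
# Illustrative sketch of write folding within one SRDF/A capture cycle:
# repeated writes to the same block are folded so only the last image of
# each block is transmitted for that cycle.

def fold_writes(writes):
    """writes: ordered (block, data) pairs captured in one active cycle.
    Returns the folded delta set: one entry per block, last write wins."""
    delta_set = {}
    for block, data in writes:
        delta_set[block] = data          # a later write replaces an earlier one
    return delta_set

writes = [(100, "A"), (200, "B"), (100, "C"), (100, "D")]
folded = fold_writes(writes)
print(folded)                            # {100: 'D', 200: 'B'}
print(len(writes), "host writes ->", len(folded), "blocks on the link")
```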

SRDF/A single session mode point in time

SRDF/A single session mode is the implementation of SRDF/A from a single SRDF/A group on the primary Symmetrix system to a single SRDF/A group on the secondary Symmetrix system. Enginuity controls the cycle switching without any host software involvement.

As shown in Figure 17, point-in-time dependent-write consistency is achieved through the processing of ordered SRDF/A delta sets (cycles) between the primary Symmetrix system and the secondary Symmetrix system:

◆ The active cycle on the primary Symmetrix system contains the current host writes designated as the N data version in the capture delta set.

◆ The inactive cycle contains the N-1 data version that is transferred using SRDF/A from the primary Symmetrix system to the secondary Symmetrix system. The primary inactive cycle is the transmit delta set and the secondary Symmetrix system inactive cycle is the receive delta set.

◆ The active cycle on the secondary Symmetrix system contains the N-2 data version in the apply delta set. This is the guaranteed point-in-time dependent-write consistent image that can be readily used in the event of a disaster or failure at the primary site.

Figure 17 SRDF/A delta sets and their relationships

Dependent-write consistency is ensured within SRDF/A through the use of the host adapter (HA on the Symmetrix system) obtaining the active cycle number from a single location in Symmetrix global memory, assigning it to each I/O as it is received by the HA, and retaining it in association with that I/O even if a cycle switch occurs during the life of that I/O (the time for the I/O to complete).

This results in the cycle switch process being an atomic event for dependent-write sequences, even though it is not physically an atomic event across a range of volumes. As a result, two I/Os with a dependent-write relationship will either be in the same cycle, or in subsequent cycles.
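This tagging scheme can be shown with a minimal model (illustrative only; the class and field names are invented):

```python
# Illustrative sketch: each write is tagged with the active cycle number,
# read once from a single global location when the HA accepts the I/O.
# The tag stays with the I/O even if a cycle switch occurs before it
# completes, so dependent writes land in the same or a subsequent cycle.

class GlobalMemory:
    def __init__(self):
        self.active_cycle = 7

    def cycle_switch(self):
        self.active_cycle += 1

def receive_write(gm, data):
    # Cycle number is captured atomically at I/O receipt.
    return {"data": data, "cycle": gm.active_cycle}

gm = GlobalMemory()
w1 = receive_write(gm, "debit")        # tagged with cycle 7
gm.cycle_switch()                      # switch occurs mid-sequence
w2 = receive_write(gm, "credit")       # dependent write tagged with cycle 8
assert w2["cycle"] >= w1["cycle"]      # same cycle or a subsequent one
```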

SRDF/A single session mode states

Figure 18 shows the three logical states in which SRDF/A can operate:

◆ Not Ready (NR) / Target Not Ready (TNR)

◆ Inactive

◆ Active

Figure 18 SRDF/A single session allowed state transitions

Not Ready/Target Not Ready state—system startup

When the SRDF environment is configured and the SRDF links are activated, all SRDF volumes are in a Not Ready (NR) state by default. All the devices on the primary Symmetrix system are Not Ready on the SRDF link. These devices can be made ready on the link by issuing the RDF-RSUM command from the host software; the state then transitions to the inactive SRDF/A state.

Inactive state

In the inactive state, devices are ready on the link, SRDF/A is inactive, and all devices function in their assigned modes (synchronous, or adaptive copy write pending/disk mode). Various commands can transition SRDF/A from the active state to the inactive state. Most of these commands maintain a dependent-write consistent copy on the secondary Symmetrix system; only DEACT_TO_XXXX does not.

Active state

The active state is the normal running state of SRDF/A. The secondary Symmetrix system is either consistent or inconsistent. The consistent active state represents a point-in-time dependent-write consistent image of the data. The inconsistent active state represents previously owed tracks that have not yet been transferred to the secondary Symmetrix system. Dependent-write consistency is not maintained for these owed tracks.

Owed tracks are a result of new device pairs being created, or because of the resolution of differences in data created by primary volume writes while the primary/secondary relationships were suspended for any reason, including a previous SRDF/A drop.

SRDF/A declares the secondary Symmetrix system consistent once all previously owed tracks from the primary Symmetrix system have been transferred to the secondary Symmetrix system devices. Specifically, when the last cycle containing this data is fully copied to global memory and to the N-2 cycle (apply delta set) on the secondary Symmetrix system.

SRDF/A single session mode delta set switching

This section examines in detail how delta set switching works for SRDF/A single session mode. The diagrams assume SRDF/A has been activated and two cycle switches have already occurred. Before a primary Symmetrix system cycle switch can occur, two conditions must be satisfied:

1. The transmit delta set must have completed transferring the data to the secondary Symmetrix system.

2. The minimum cycle time must be reached.
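These two preconditions can be captured in a small sketch (illustrative Python, not Enginuity logic; the 30-second default minimum cycle time is an assumption for the example):

```python
# Illustrative sketch: the two preconditions checked before a primary-side
# cycle switch. Both must hold: the transmit delta set has fully drained
# to the secondary, and the minimum cycle time has elapsed.

def can_switch_primary(transmit_bytes_left, elapsed_s, min_cycle_time_s=30):
    """Return True only when the transmit DS is empty AND the minimum
    cycle time has been reached. The 30 s default is illustrative only."""
    return transmit_bytes_left == 0 and elapsed_s >= min_cycle_time_s

print(can_switch_primary(0, 45))     # True:  drained and time reached
print(can_switch_primary(1024, 45))  # False: data still in transit
print(can_switch_primary(0, 10))     # False: minimum cycle time not reached
```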

Figure 19 depicts application writes being collected in the capture delta set on the primary Symmetrix system. The previous cycle’s transmit delta set completes the SRDF transfer to the receive delta set in the secondary Symmetrix system; this is the N-1 copy. Also, the secondary Symmetrix system’s apply delta set applies marked write pending to the secondary devices, which is the N-2 copy.

Figure 19 Capture delta set collects application writes

Figure 20 on page 113 illustrates the primary Symmetrix system waiting for the minimum cycle time to elapse, and the transmit delta set emptying all of the data transferred to the secondary Symmetrix system. Once these conditions are satisfied, the primary Symmetrix system sends a commit message to the secondary Symmetrix system, so that the secondary Symmetrix system cycle switch is done in unison with the primary Symmetrix system cycle switch.

[Figure 19 legend: 1. Capture delta set (DS) collects application write I/O.]

Figure 20 Transmit delta set empties

Figure 21 illustrates an SRDF transfer halted prior to a primary Symmetrix system cycle switch.

Figure 21 Transfer is halted prior to primary Symmetrix system cycle switch

Figure 22 on page 114 illustrates that the primary Symmetrix system cycle switch (step 2c) occurs between the capture and transmit delta set. This is done automatically through Enginuity, since it is a single session (single SRDF group) SRDF/A environment, and occurs in unison with the cycle switch in the secondary Symmetrix system as illustrated in Figure 24 on page 115.

[Figure 20 legend: 2. Primary waits for the minimum cycle time and for the Transmit DS to empty; 2a. Primary tells Secondary to commit the Receive DS (begins Secondary step 3 in unison).]

[Figure 21 legend: 2b. SRDF transfer halted.]

Figure 22 Primary Symmetrix system delta set switch

Figure 23 illustrates a new capture delta set made available to continue receiving new host writes.

Figure 23 New capture delta available for host writes

Before a secondary Symmetrix system cycle switch can occur (see Figure 24 on page 115) two conditions must be met:

1. The secondary Symmetrix system must receive the commit message from the primary Symmetrix system (step 3a).

2. The apply delta set (N-2 copy) must complete its restore process and mark the data write pending to the secondary devices (step 3a).

Once the secondary Symmetrix system receives the commit message from the primary Symmetrix system, it verifies that the apply delta set has been restored (data has been marked write pending to the secondary devices). This occurs while the primary Symmetrix system performs the cycle switch between its capture and transmit delta sets (step 2c).
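The coordinated primary and secondary cycle switch walked through above can be modeled as a toy transition over delta-set version numbers. This is an illustrative sketch, not Enginuity code; the dictionary layout is invented:

```python
# Illustrative sketch of one coordinated SRDF/A cycle switch, tracking the
# data version (N-relative) held by each delta set. The primary
# Capture->Transmit switch happens in unison with the secondary
# Receive->Apply switch, after the Apply restore completes.

def cycle_switch(state):
    assert state["transmit"] is None, "transmit DS must be empty (drained)"
    # Secondary side: restored Apply DS is replaced by the Receive DS,
    # and an empty Receive DS is made available for the next transfer.
    state["apply"] = state["receive"]
    state["receive"] = None
    # Primary side: Capture DS becomes the Transmit DS, and a new Capture
    # DS is opened for host writes.
    state["transmit"] = state["capture"]
    state["capture"] = state["capture"] + 1
    return state

state = {"capture": 10, "transmit": None, "receive": 9, "apply": 8}
state = cycle_switch(state)
print(state)  # {'capture': 11, 'transmit': 10, 'receive': None, 'apply': 9}
```

After the switch, the new transmit delta set (version 10) drains into the now-empty receive delta set, and the apply delta set holds the N-2 consistent image (version 9), matching the walkthrough above.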

[Figure 22 legend: 2c. Primary cycle switch occurs – Capture DS becomes the Transmit DS.]

[Figure 23 legend: 2d. New Capture DS available for host I/O.]

Figure 24 Secondary Symmetrix system waits for apply delta set to be restored

As shown in Figure 25, the next step is a delta set cycle switch on the secondary Symmetrix system between the receive (inactive) and apply (active) delta sets. This preserves the dependent-write consistent copy at the secondary Symmetrix system prior to receiving the next dependent-write consistent copy.

Figure 25 Secondary Symmetrix system delta set switch

[Figure 24 legend: 3. Secondary receives commit from Primary; 3a. Check that the data in the Apply DS is restored (data marked write pending to the R2 devices).]

[Figure 25 legend: 3b. Secondary cycle switch – Receive DS becomes the Apply DS.]

SRDF/A single session mode delta set switching 115

Page 116: EMC SRDF/A and SRDF/A Multi-Session Consistency on z · PDF fileEMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS Version 1.5 • Planning an SRDF/A and SRDF/A Multi-Session

116

Understanding SRDF/A and SRDF/A MSC Consistency

Figure 26 illustrates that an empty delta set is available for the SRDF transfer of data (step 3c).

Figure 26 Secondary Symmetrix system new receive delta set available for SRDF


Figure 27 illustrates that the secondary Symmetrix system sends an acknowledgement to the primary Symmetrix system. The secondary Symmetrix system then begins the commit process for the data in its apply delta set.

Figure 27 Secondary Symmetrix system begins restore of apply delta set


Figure 28 (step 4a) illustrates the SRDF transfer of data from the primary Symmetrix system’s transmit delta set to the secondary Symmetrix system’s receive delta set.

Figure 28 Primary Symmetrix system begins SRDF transfer

The step annotations in Figure 28, which summarize the full cycle-switch sequence, are:

1. The capture delta set (DS) collects application write I/O.

2. The primary waits for the minimum cycle time and for the transmit DS to empty, then:

a) The primary tells the secondary to commit the receive DS (beginning secondary step 3 in unison)
b) The SRDF transfer is halted
c) The primary cycle switch occurs – the capture DS becomes the transmit DS
d) A new capture DS is available for host I/O

3. The secondary receives the commit from the primary, then:

a) Checks that the data in the apply DS is restored (data marked write pending to the R2 devices)
b) Performs the secondary cycle switch – the receive DS becomes the apply DS
c) Makes a new receive DS available for the SRDF transfer
d) Sends the primary an acknowledgement
e) Begins the restore of the apply DS

4. The primary receives acknowledgement of the secondary cycle switch:

a) The SRDF transfer begins
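The numbered sequence above can be sketched as a toy Python model (illustrative only: the class and method names are invented, not an EMC interface, and the sketch assumes each apply delta set finishes restoring before the next cycle switch):

```python
# Toy model of the SRDF/A single-session cycle switch.
# All names are invented for illustration; this is not an EMC API.

class SrdfaSession:
    def __init__(self):
        self.cycle = 0        # current capture cycle number N
        self.capture = []     # primary: collects host writes (active)
        self.transmit = []    # primary: being sent over SRDF (inactive)
        self.receive = []     # secondary: arriving data (inactive)
        self.apply = []       # secondary: being restored to R2 (active)
        self.r2 = []          # writes committed to the R2 devices

    def host_write(self, data):
        # Step 1: capture delta set collects application write I/O.
        self.capture.append(data)

    def cycle_switch(self):
        # Step 2: primary waits for the transmit DS to empty, then switches.
        assert not self.transmit, "transmit DS must drain first"
        # Steps 3a/3e (simplified): apply DS is restored to the R2 devices.
        self.r2.extend(self.apply)
        # Step 3b: secondary cycle switch - receive DS becomes apply DS.
        self.apply, self.receive = self.receive, []
        # Steps 2c/2d: primary cycle switch - capture DS becomes transmit DS.
        self.transmit, self.capture = self.capture, []
        self.cycle += 1

    def srdf_transfer(self):
        # Step 4a: transmit DS drains into the secondary's receive DS.
        self.receive.extend(self.transmit)
        self.transmit = []

s = SrdfaSession()
s.host_write("w1")                    # cycle N
s.cycle_switch(); s.srdf_transfer()   # w1 now in receive (N-1)
s.host_write("w2")
s.cycle_switch(); s.srdf_transfer()   # w1 in apply (N-2), w2 in receive
s.cycle_switch(); s.srdf_transfer()   # w1 restored to R2
```

Running the demo shows a write needing two cycle switches after capture before it reaches the apply delta set, matching the N / N-1 / N-2 numbering used throughout this chapter.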


SRDF/A single session mode state transitions

This section provides a general overview of how SRDF/A single session mode transitions between states.

Figure 29 illustrates the transition path that is referenced in the following sections.

Figure 29 SRDF/A single session transition path

Note: For specific commands required to switch modes, refer to Chapter 5, “Implementation of SRDF/A.”

Switching to SRDF/A mode

The Host Component software can be used to switch to asynchronous mode.

Note: SRDF/A is an SRDF group-level feature, meaning that all devices assigned to an SRDF group configured to operate in asynchronous mode will only operate in that mode when the SRDF/A state is active.

The modes shown in Figure 29 are: Synchronous; SRDF/A; Adaptive Copy Disk; Adaptive Copy Write Pending; and Adaptive Copy pending off combined with SRDF/A.


SRDF verifies that all the devices are ready, then transitions to the active state on both the primary and secondary Symmetrix systems. Consequently, the delta sets are established on both the primary and secondary Symmetrix systems, and the SRDF/A mechanism is activated.

Tolerance mode must be set to off to ensure dependent-write consistency when running in asynchronous mode. Throughout this chapter it is always assumed that the systems are running with tolerance mode set to off. If tolerance mode is set to on, the SRDF/A cycle switch process will continue; but dependent-write consistency is not guaranteed.

Transitioning from synchronous to asynchronous

When switching from SRDF synchronous mode, all devices are already synchronized, and the secondary Symmetrix system indicates a consistent state, confirming that the data is dependent-write consistent.

If there were previously owed tracks to be copied, there is no point-in-time dependent-write consistency at the secondary Symmetrix system until the last owed track has been sent to the secondary Symmetrix system, and is in the N-2 cycle (apply delta set). This happens when the disk adapter (DA) places the track owed to the secondary Symmetrix system in the capture delta set(s) and SRDF/A cycle switching occurs until that track is in the apply delta set. Once this occurs, SRDF/A indicates that the state is consistent, which means the data is dependent-write consistent.

Note: It is recommended to capture a dependent-write consistent copy (locally and/or remotely) on a set of BCVs or clones prior to performing this process.

Transitioning from adaptive copy write pending mode to asynchronous

When the mode is set to SRDF/A from adaptive copy write pending mode, all devices are moved into the adaptive copy pending off mode. With SRDF/A active, when the DA scans a device in the pending off mode, rather than creating a separate SRDF queue record, it adds the slot to the active cycle (capture delta set). If no slots remain write pending to the SRDF mirror outside the SRDF/A cycle, the device can transition out of the pending off mode. Once all devices transition out of pending off mode, two cycle switches are required before the secondary Symmetrix system reports a consistent state and the data is dependent-write consistent.
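The pending-off exit condition described above reduces to a subset test; a minimal sketch (hypothetical function, with slots modeled as ids):

```python
# Sketch of the adaptive copy "pending off" exit test described above:
# a device may leave pending-off mode once every slot still write
# pending to the SRDF mirror is already in the SRDF/A cycle.

def can_leave_pending_off(wp_slots, srdfa_cycle_slots):
    """wp_slots: slots write pending to the SRDF mirror (set of ids).
    srdfa_cycle_slots: slots already in the capture delta set."""
    return wp_slots <= srdfa_cycle_slots   # subset test

print(can_leave_pending_off({1, 2}, {1, 2, 3}))   # True
print(can_leave_pending_off({1, 9}, {1, 2, 3}))   # False: slot 9 outside cycle
```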


Transitioning from adaptive copy disk mode to SRDF/A

The length of time needed to transmit the owed tracks to the secondary Symmetrix system in asynchronous mode depends on the number of outstanding tracks and on the available network bandwidth between the systems.

SRDF/A produces a consistent state on the secondary Symmetrix system and a dependent-write consistent copy of data only after all track-based resynchronization operations are complete, and two additional cycle switches have occurred. However, each cycle switch (new capture delta set) limits the number of copy I/Os to 30,000 tracks divided by the number of active sessions in the system to avoid cache exhaustion in the primary Symmetrix system.

Note: When Delta Set Extension (DSE) is paging out, the number of copy I/Os allowed is reduced to 10% of what is described above.
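The per-cycle copy ceiling described above can be expressed as a small helper (a sketch of the stated arithmetic only, not firmware behavior; the function name is invented):

```python
# Sketch of the per-cycle copy I/O ceiling described in the text:
# 30,000 tracks divided by the number of active sessions in the system,
# reduced to 10% while Delta Set Extension (DSE) is paging out.

CYCLE_TRACK_LIMIT = 30_000

def copy_io_limit(active_sessions, dse_paging=False):
    if active_sessions < 1:
        raise ValueError("at least one active session required")
    limit = CYCLE_TRACK_LIMIT // active_sessions
    if dse_paging:
        limit //= 10   # reduced to 10% while DSE pages out
    return limit

print(copy_io_limit(1))                    # 30000
print(copy_io_limit(4))                    # 7500
print(copy_io_limit(4, dse_paging=True))   # 750
```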

The recommendation as to the best synchronization approach is based on two primary drivers: (1) operational simplicity, and (2) minimal time to secondary consistency. Synchronization options supporting each of these goals are described in the remainder of this section.

Operational simplicity, less concern for immediate consistency

For some customers, the time to become secondary consistent may not be a hard requirement; that is, they may not have an immediate need for secondary consistency. For these customers, the additional time spent synchronizing is of lesser importance than the overhead of monitoring track synchronization rates and switching modes at the appropriate time.

For these customers, best practice, as adopted by several large EMC customers, suggests that asynchronous mode may be adopted immediately, allowing Enginuity to perform the track-based operations required to bring the systems into synchronization. The time required to accomplish the synchronization, and hence consistency, may be on the order of a few additional hours, but it is achieved with little or no monitoring and is therefore operationally simple.

Minimize time to secondary consistency, some operational effort

For other customers, the time to secondary consistency is the primary driver and should be minimized at the cost of some operational effort. Customer experience in this regard suggests that (given the efficiency


of SRDF adaptive copy mode at bulk data transfer), it is advantageous to utilize this mode for a significant portion of the initial synchronization, thereby expediting the synchronization process and achieving secondary consistency by a final switch into asynchronous mode.

Note: In some cases, EMC support personnel may be able to tune specific adaptive copy mode parameters as necessary to achieve maximum utilization of link bandwidth.

The speed of adaptive copy mode synchronization depends primarily on the link bandwidth and the system configurations. The only remaining factor is determining when best to switch from adaptive copy to asynchronous mode; the ideal time is once the number of outstanding invalid tracks reaches 30,000 or less (the capacity limit for an individual SRDF cycle). The remaining tracks can then be processed within a single cycle, and secondary consistency is achieved immediately after two additional cycle switches have occurred (transferring these final outstanding tracks to the secondary system's disk).

Note: For some customers and configurations, it may be sufficient to switch to asynchronous mode once the rate of decrease of the invalid tracks exhibits signs of slowing, that is, reaching a plateau, or leveling off. This approach requires an additional amount of manual monitoring, but (under these conditions) will support the objective of the shortest time to consistency.
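The two decision points described above (switch once the outstanding invalid tracks fit in one cycle, or once the decrease plateaus) might be monitored with logic like the following hypothetical sketch; in practice the track counts would come from Host Component queries:

```python
# Sketch: decide when to switch from adaptive copy to asynchronous mode.
# Switch either when outstanding invalid tracks fit in a single SRDF/A
# cycle (30,000 tracks) or when the rate of decrease has plateaued.

CYCLE_CAPACITY = 30_000

def should_switch(history, plateau_ratio=0.05):
    """history: recent invalid-track counts, oldest first."""
    current = history[-1]
    if current <= CYCLE_CAPACITY:
        return True                      # remaining tracks fit in one cycle
    if len(history) >= 2:
        drop = history[-2] - current
        # Plateau: per-interval decrease fell below 5% of the backlog.
        if 0 <= drop < plateau_ratio * current:
            return True
    return False

print(should_switch([500_000, 250_000]))   # False: still draining fast
print(should_switch([120_000, 118_000]))   # True: plateau reached
print(should_switch([80_000, 25_000]))     # True: fits in one cycle
```

The 5% plateau threshold is an arbitrary illustration; an installation would choose its own based on link behavior.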

Switching to SRDF/S mode from SRDF/A single session mode

With Enginuity 5x71 and above, it is possible to transition to a synchronous state from SRDF/A without losing dependent-write consistency. However, this is only allowed for SRDF/A single session mode, and the following rules apply:

◆ The transition is not immediate. Once a transition is requested, it may take some time for the SRDF/A mode to be transitioned into SRDF/S (synchronous).

◆ Some performance degradation occurs with synchronous mode while the transition takes place.

◆ Both the primary and secondary Symmetrix systems must be running Enginuity 5x71 or higher.


Note: It is recommended that you capture a dependent-write consistent copy (locally or remotely) on a set of BCVs or clones prior to performing this process.

Coming out of the SRDF/A active state

This section only applies to an SRDF/A active state that is consistent, meaning that the data is in a dependent-write consistent state.

SRDF/A supports several methods of dropping out of the active state into the Not Ready state. In order to maintain a dependent-write consistent copy, two options are discussed: stopping SRDF/A during a cycle, or stopping SRDF/A at the end of the current cycle. The mainframe software and Enginuity refer to these as drop and pend-drop, respectively.

Note: It is recommended that the resulting dependent-write consistent data be obtained with either a set of BCVs or clones prior to any resynchronization. During the resynchronization activity, the dependent-write consistent image at the secondary Symmetrix system is compromised.

The third option, while not recommended, is simply to remove SRDF/A from the active state and transition the SRDF mode to another state without preserving the dependent-write consistency at the secondary Symmetrix system.

Note: An active state with an inconsistent secondary Symmetrix system implies that accumulated copy I/Os are still transferred, and a dependent-write consistent image cannot be created on the secondary Symmetrix system with either method of dropping SRDF/A.

Dropping SRDF/A single session mode during the cycle

Dropping out of SRDF/A single session mode during the cycle places the devices in a Not Ready state immediately, and the current cycle cannot complete. This results in tracks being owed on both the primary and secondary Symmetrix systems of the SRDF relationship; resuming SRDF requires that these tracks are resolved in the normal manner. However, dropping out of asynchronous mode during the cycle does not compromise the dependent-write consistency of the data at the secondary Symmetrix system.


Enginuity initiated drop

Enginuity may drop an SRDF/A single session mode for a number of reasons. Some of these reasons and their likely causes are as follows:

◆ The primary Symmetrix system’s write pending or SRDF/A maximum cache limit was reached:

• Bandwidth not sized properly
• Global memory not sized properly
• Implementation (whole configuration) not sized properly for the workload
• Secondary Symmetrix system device write pending limit reached (unbalanced configuration)
• Any combination of the above

◆ The secondary Symmetrix system devices were made NR on both the link and primary side:

• Done by host command
• Due to excessive device or link errors

◆ All links were lost:

• Done manually
• External network or network equipment issues
• Excessive link errors

Regardless of the cause, these drops still preserve a dependent-write consistent image on the secondary Symmetrix system.

SRDF/A single session mode—use of PEND_DROP

PEND_DROP places the devices in a Not Ready state only at the end of the current in-process cycle. Write-pending tracks in the active cycle are converted to tracks owed on the primary Symmetrix system only. By dropping SRDF/A on the cycle boundary, there is no need to resolve owed tracks when SRDF/A is resumed. This process will necessitate a track-level resynchronization of the primary to secondary SRDF/A volumes; during this resynchronization, the secondary SRDF/A volumes will be inconsistent. See Section 6.1 for a detailed example of resuming after a PEND_DROP.

However, any new writes on the primary Symmetrix system result in those tracks being marked as owed to the secondary Symmetrix system. All of these owed tracks are sent to the secondary Symmetrix system once the links are reactivated.

Deactivating SRDF/A single session mode


SRDF/A single session mode offers the option of moving out of the active state immediately while leaving the SRDF devices ready on the link. Because the devices are left ready on the SRDF link, data continues to flow, and the dependent-write consistency of the data at the secondary Symmetrix system is compromised. The data in the capture and transmit delta sets is marked as tracks owed to the secondary Symmetrix system, similar to a resynchronization operation. These owed tracks, however, are not point-in-time dependent-write consistent.


SRDF/A single session cleanup process

Enginuity automatically begins a cleanup process once SRDF/A single session mode is dropped. The primary Symmetrix system marks new incoming writes as owed to the secondary Symmetrix system and performs a cleanup of the capture and transmit delta sets: as these delta sets are discarded, their data is marked as owed to the secondary Symmetrix system. All of these owed tracks are sent to the secondary Symmetrix system once SRDF is resumed, if the desired copy direction is primary to secondary.

The secondary Symmetrix system marks and discards the receive delta set; its data is marked as tracks owed to the primary Symmetrix system. Once SRDF is resumed, these tracks are resent from the primary Symmetrix system if the copy direction has not changed.

The secondary Symmetrix system ensures that the apply (N-2) delta set is successfully applied to disk; this is the dependent-write consistent image.
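The cleanup just described amounts to converting delta-set contents back into owed-track bookkeeping on each side; a minimal sketch, with invented names and delta sets modeled as sets of track ids:

```python
# Sketch of the cleanup after SRDF/A single session mode is dropped.
# Delta sets and owed-track bitmaps are modeled as sets of track ids.

def primary_cleanup(capture, transmit, owed_to_secondary):
    # Capture and transmit delta sets are discarded; their tracks are
    # marked as owed to the secondary system.
    owed_to_secondary |= capture | transmit
    capture.clear()
    transmit.clear()
    return owed_to_secondary

def secondary_cleanup(receive, apply_ds, r2, owed_to_primary):
    # The receive delta set is discarded and marked owed to the primary;
    # the apply (N-2) delta set finishes restoring to the R2 devices,
    # preserving the dependent-write consistent image.
    owed_to_primary |= receive
    receive.clear()
    r2 |= apply_ds
    apply_ds.clear()
    return owed_to_primary, r2

owed = primary_cleanup({1, 2}, {3}, set())
print(sorted(owed))                 # [1, 2, 3]
owed_p, r2 = secondary_cleanup({7}, {5, 6}, {4}, set())
print(sorted(owed_p), sorted(r2))   # [7] [4, 5, 6]
```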

It is important to capture a “gold” copy of the dependent-write consistent data on the secondary Symmetrix system devices prior to any resynchronization, because the resynchronization process could compromise the dependent-write consistent image. The “gold” copy can be captured on a set of BCVs or clones at the secondary system. This is discussed later in this chapter.


SRDF/A single session mode recovery scenarios

This section describes the different recovery scenarios associated with SRDF/A single session mode.

Temporary link loss

If SRDF/A suffers a temporary loss (less than 10 seconds by default) on all of the SRDF links, the SRDF/A state remains active and data continues to accumulate in global memory on the primary Symmetrix system. This may result in an elongated cycle, but the secondary Symmetrix system dependent-write consistency is not compromised, and the primary and secondary Symmetrix system device relationship is not suspended. The SRDF Link Limbo time-out value is a preset amount of time (default of 10 seconds) that SRDF will wait until it declares a permanent loss of all links; this can be reconfigured through a Symmetrix system change.

A switch to SRDF/S mode with the link loss time configured for more than 10 seconds may result in an application, database, or host failure if SRDF is restarted in synchronous or semi-synchronous mode. Refer to the Transmit Idle section in the EMC SRDF Host Component for z/OS Product Guide.
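The timing behavior described above can be modeled as a tiny decision function (a simplified illustration; only the 10-second Link Limbo default comes from the text, and the Transmit Idle branch anticipates the feature discussed later in this chapter):

```python
# Simplified model of SRDF/A behavior on loss of all SRDF links.
# Outages shorter than the Link Limbo timeout leave the session active;
# longer outages drop the session unless Transmit Idle is enabled.

DEFAULT_LINK_LIMBO_SECS = 10

def session_state(outage_secs, limbo_secs=DEFAULT_LINK_LIMBO_SECS,
                  transmit_idle=False):
    if outage_secs <= limbo_secs:
        return "active"           # cycle elongates; consistency kept
    if transmit_idle:
        return "transmit-idle"    # data held in primary cache
    return "dropped"              # devices made not ready on the link

print(session_state(5))                        # active
print(session_state(60))                       # dropped
print(session_state(60, transmit_idle=True))   # transmit-idle
```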

Non-temporary link loss

If SRDF/A experiences a permanent loss of all links, it drops all of the devices on the link to a not ready state (TNR). As a result, all data in the active and inactive primary Symmetrix system cycles (capture and transmit delta sets) is changed from write pending for the secondary Symmetrix system to owed to the secondary Symmetrix system. In addition, any new writes on the primary Symmetrix system are marked as owed to the secondary Symmetrix system. All tracks marked as owed to the secondary Symmetrix system are sent over once the links are restored.

On the secondary Symmetrix system, the inactive cycle data (receive delta set) is marked as owed to the primary Symmetrix system, and the active cycle data (apply delta set) completes its commit to the secondary Symmetrix system devices.


When the links are restored, normal SRDF recovery procedures are followed. The track tables are compared and merged based on normal host recovery procedures used by the host software. The data is then resynchronized by sending the owed tracks as part of the SRDF/A cycles.

Note: The data on the secondary Symmetrix system devices is always dependent-write consistent in SRDF/A active/consistent state, even when the SRDF links have failed. However, the act of starting a resynchronization activity will compromise the dependent-write consistency until the resynchronization is complete and two cycle switches have occurred. For this reason, it is recommended that a “gold” copy of the dependent-write consistent image is saved using either a set of BCVs or clones on the secondary Symmetrix system.


SRDF/A Reserve Capacity enhancement: Transmit Idle

This section discusses Transmit Idle.

Transmit Idle overview

The main premise of SRDF/A is to provide a secondary site dependent-write consistent point-in-time image; an image that results in minimal data loss in the event of a disaster at the primary site.

Figure 30 SRDF/A delta set architecture

Enginuity functionality

This section provides additional details regarding the Symmetrix system SRDF/A Transmit Idle enhancement.

Beginning with Enginuity 5x71 (with maintenance) and above, an enhancement called SRDF/A Transmit Idle provides a function in which the loss of all links for an SRDF/A session would place the session into a state referred to as Transmit Idle. This Transmit Idle state ensures that the secondary devices remain ready on the link, even though the SRDF links for the SRDF/A groups are physically down. The result is that data is held in the primary subsystem and not sent to the secondary subsystem during a brief link outage, and SRDF/A does not drop; this provides for an automatic recovery that does not require user intervention.

Figure 30 shows the capture delta set (active cycle N) and transmit delta set (inactive cycle N-1) on the primary, with the corresponding receive delta set (inactive, N-1) and apply delta set (active, N-2) on the secondary. The numbered steps are: (1) host writes enter the active cycle N; (2) the inactive cycle N-1 sends data to the target; (3) completed cycles (N-2) are validated.


From a functional perspective, Transmit Idle is a resiliency enhancement to EMC’s SRDF/A feature that provides SRDF/A with the capability of dynamically and transparently extending the Capture, Transmit, and Receive phases of the SRDF/A cycle while masking the effects of an “all SRDF links lost” event. Without the SRDF/A Transmit Idle enhancement, an “all SRDF links lost” event would normally result in the abnormal termination of SRDF/A. The SRDF/A Transmit Idle enhancement has been specifically designed to prevent such occurrences.

Enginuity prerequisites

This section lists the Enginuity requirements necessary to support the SRDF/A Transmit Idle enhancement. The Enginuity requirements are listed in Table 3.

Table 3 Mainframe Host Component Enginuity requirements

Enginuity level    Release level
5772 and later     Initial
5771               94.102
5771               92.99
5671               60.65
5671               59.64

Other key Enginuity requirements, limitations, and recommendations include:

◆ Need to acquire the appropriate SRDF or SRDF/A licenses or both.

◆ SRDF/A Transmit Idle is not supported on ESCON RAs.

◆ SRDF/A Transmit Idle needs to be enabled at both the primary and secondary sites.

Refer to EMC Knowledge base emc148716 for more information.

Overall considerations

The goal of this enhancement is to provide an additional level of resiliency during short SRDF/A “all SRDF links lost” events. In many cases, the “all SRDF links lost” state lasts only one to two minutes, and this is where the assistance provided by SRDF/A


Transmit Idle is most effective. When active, this enhancement presents an SRDF/A environment in the Transmit Idle state. The use of SRDF Transmit Idle assumes that the primary side SRDF/A cache limitation is not exceeded, and that careful planning has already taken place to introduce SRDF/A into the environment.

SRDF/A Transmit Idle is not a solution for all causes of SRDF/A drops. It addresses only short link outages when the Symmetrix system cache is not constrained. It is important to note that SRDF/A Transmit Idle does not replace or change the existing SRDF Link Limbo parameter discussed earlier. SRDF/A Transmit Idle goes into effect only after the existing SRDF Link Limbo timer has expired.

SRDF/A Transmit Idle is of limited benefit if the Symmetrix system is already cache-constrained while running SRDF/A. While in the Transmit Idle state, SRDF/A continues to collect writes from the host and stores them in the cache of the primary Symmetrix system. If, during normal operation of SRDF/A, the primary Symmetrix system is running close to either its maximum system write-pending cache limit or the SRDF/A maximum cache use value (when set), then it is highly probable that SRDF/A will drop. In this case, the drop is caused by a “cache full” condition that occurs shortly after all links are lost and SRDF/A Transmit Idle is invoked.

SRDF/A Transmit Idle should not be used until it is enabled on both sides of the SRDF link since enabling SRDF/A Transmit Idle on only one side of the link will not prevent SRDF/A from dropping with “all SRDF links lost” type events. It is important to note that EMC host-based management software (in this case, Host Component) must also be available on both sides of the SRDF link.

Balancing SRDF/A Transmit Idle

SRDF/A Transmit Idle performs optimally when the SRDF/A environment is properly balanced between and among Symmetrix systems on either side of the SRDF links. The following SRDF/A component configurations must be balanced:

◆ Types of DMX subsystems in use on each side of the link

◆ Comparable subsystem cache sizes and allocation to SRDF/A

◆ SRDF link bandwidth sized appropriately to the requirements of the applications and the volume of data being replicated

◆ The number of drives per subsystem, the drive type and capacities, and the data protection schemes in use


SRDF/A Transmit Idle is not a substitute for ensuring that the SRDF network quality between the sites is sufficient to meet an application’s replication objectives.

Applicable settings

Activation and deactivation of the SRDF/A Transmit Idle enhancement is performed at the SRDF/A group level, using the host software (refer to the EMC Symmetrix SRDF Host Component for z/OS Product Guide). SRDF/A Transmit Idle can be set for a non-SRDF/A group in anticipation of SRDF/A being used for that group at some time in the future.

SRDF/A Transmit Idle is activated by enabling it on the primary and secondary Symmetrix systems, using the SRDF Host Component commands with the appropriate parameters as follows:

◆ NEVER SET (the default, disabled)—results in the feature being disabled

◆ OFF (disabled)—results in the feature being disabled

◆ ON (enabled)—enables the Transmit Idle state to be invoked upon the loss of all SRDF links for the SRDF group. Transmit Idle remains in effect until at least one of the SRDF links for the group becomes usable again, or until the primary subsystem cache resources are exhausted, or until customer settings related to the SRDF/A’s cache usage on the primary Symmetrix are exceeded.

Note: If Transmit Idle has never been set and the secondary Symmetrix (of an SRDF pairing) is at or above Enginuity 5772.79, Transmit Idle will also be set on automatically for you.
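The three settings, together with the auto-enable rule in the note, can be sketched as follows (hypothetical helper; the Enginuity version is modeled as a (major, patch) tuple for comparison):

```python
# Sketch of resolving the effective Transmit Idle state for an SRDF group.
# Settings "NEVER SET" (the default), "OFF", and "ON" come from the text;
# if never set and the secondary runs Enginuity 5772.79 or later,
# Transmit Idle is switched on automatically.

def effective_transmit_idle(setting, secondary_enginuity):
    if setting == "ON":
        return True
    if setting == "OFF":
        return False
    if setting == "NEVER SET":
        return secondary_enginuity >= (5772, 79)   # auto-enable rule
    raise ValueError(f"unknown setting: {setting}")

print(effective_transmit_idle("NEVER SET", (5772, 79)))   # True (auto)
print(effective_transmit_idle("NEVER SET", (5771, 92)))   # False
print(effective_transmit_idle("OFF", (5772, 79)))         # False
```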

Invocation and upgrading

SRDF/A Transmit Idle is invoked dynamically. For this reason, an Enginuity upgrade is allowed while SRDF/A Transmit Idle is active.

When SRDF/A Transmit Idle is invoked, operations that require the SRDF link connectivity may fail, just as they would without SRDF/A Transmit Idle being active. In this case, appropriate error messages are displayed as expected. For example, certain SRDF/A group level maintenance actions may be affected, such as replacing the last RA in an SRDF/A group.

EMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS


Usage considerations

When an SRDF/A group is in the Transmit Idle state, all SRDF links for the group are unavailable, and remote requests do not have any usable paths. As such, any host software operations that generate remote requests will fail with an appropriate error message while Transmit Idle is active.

Once SRDF/A Transmit Idle is enabled, all hosts running SRDF/A control software must be upgraded to a level capable of providing support for the Transmit Idle enhancement.

When using Transmit Idle, there is no longer a need to set the SRDF Link Limbo parameter to a value greater than the 10-second default. Customers who currently have SRDF Link Limbo set to a value greater than 10 seconds must change it back to 10 seconds when they enable SRDF/A Transmit Idle.

Testing considerations

This section provides additional best practices to support initial SRDF/A Transmit Idle testing. To test Transmit Idle (the ability to enter the Transmit Idle state), choose any one of the following options:

◆ Disconnect the cables connecting the RA directors to the network at either side of the SRDF group connection.

◆ Disconnect the cables connecting the RA directors at the switch or router at either side of the SRDF group connection.

◆ Disable the zone set to which the RA directors supporting the SRDF group are configured.

◆ Disable the ports on the network extension equipment for the RA directors supporting the SRDF group.

◆ Disable the ports on the network extension equipment that will sever all connections for the SRDF group between the Symmetrix systems, such as ISL or DWDM network ports (specific to the network setup at any given installation).

If all RA directors that support an SRDF group in either Symmetrix system are brought offline (by EMC service personnel or through SRDF control software), the SRDF/A sessions supported by those RA directors will drop. This is by design. The SRDF/A Transmit Idle feature is designed to allow an SRDF/A session to remain active in the event of an unplanned network outage that interrupts all the SRDF group connections between the Symmetrix systems.

If the RA directors in a Symmetrix system are taken offline intentionally, the SRDF/A sessions supported by those RA directors are expected to drop as well.

Host Component interface to SRDF/A Transmit Idle

Mainframe software prerequisites

This section lists the mainframe software requirements necessary to support the SRDF/A Transmit Idle enhancement. Host Component and ResourcePak Base requirements are detailed in Table 4.

Note: As this enhancement continues to evolve, additional fixes may be required. Stay current with maintenance and process information.

Specific Enginuity requirements can be found in Table 3.

Mainframe interface details

Key facts regarding the mainframe Transmit Idle interface are as follows:

◆ Transmit Idle can be set to on or off using the SC SRDFA command.

◆ The SQ SRDFA command may be used to determine if the Transmit Idle feature is enabled. It describes Transmit Idle status (that is, SRDF/A is active, SRDF/A devices are ready on the link; however, the link is down).

Table 4 Mainframe Host Component requirements

Host Component   ResourcePak Base   Required Host Component and ResourcePak Base maintenance levels
5.3              5.5                SR53057, SF55065
5.3              5.6                SR53057, SF56013
5.4              5.5                SR54004, SF55065
5.4              5.6                SR54004, SF56013
5.5              5.7                Included in base release
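For scripted prerequisite checks, Table 4 can be treated as a lookup keyed by the Host Component and ResourcePak Base versions. The table data below is taken from Table 4; the helper itself is an illustrative sketch, not an EMC-supplied tool.

```python
# Table 4 as a lookup: (Host Component, ResourcePak Base) -> required maintenance.
REQUIRED_MAINTENANCE = {
    ("5.3", "5.5"): "SR53057, SF55065",
    ("5.3", "5.6"): "SR53057, SF56013",
    ("5.4", "5.5"): "SR54004, SF55065",
    ("5.4", "5.6"): "SR54004, SF56013",
    ("5.5", "5.7"): "Included in base release",
}

def required_fixes(host_component, resourcepak_base):
    """Return the maintenance required for a version pair, per Table 4."""
    key = (host_component, resourcepak_base)
    if key not in REQUIRED_MAINTENANCE:
        raise ValueError("combination not listed in Table 4: %s/%s" % key)
    return REQUIRED_MAINTENANCE[key]

print(required_fixes("5.4", "5.6"))  # SR54004, SF56013
```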


◆ MSC prevents startup of an MSC group if one or more SRDF groups are in Transmit Idle status.

◆ MSC displays Transmit Idle status if there is a temporary “all SRDF links lost” condition for any of the SRDF/A groups in the MSC session.

◆ MSC appears normal when exiting Transmit Idle status after one of the links for the SRDF/A groups resumes.

◆ The normal DROP command does not work when the status is Transmit Idle.

◆ A new SC SRDFA DROP_SIDE command is added to SRDF HC.

The following operations use the mainframe interface:

◆ Turn Transmit Idle on:

#SC SRDFA,LCL(cuu,ragroup#),TRANSMIT_IDLE,ON

◆ Turn Transmit Idle off:

#SC SRDFA,LCL(cuu,ragroup#),TRANSMIT_IDLE,OFF

◆ Drop SRDF/A if Transmit Idle is active:

#SC SRDFA,LCL(cuu,ragroup#),DROP_SIDE
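All three operations share the same command shape, so a small helper can assemble them from the control unit address and RA group number. The command syntax is as documented above; the helper function is an illustrative convenience, not part of the Host Component.

```python
def sc_srdfa_command(cuu, ragroup, action):
    """Build an SC SRDFA Host Component command string.

    action is one of "TRANSMIT_IDLE,ON", "TRANSMIT_IDLE,OFF", or
    "DROP_SIDE", matching the operations shown above.
    """
    return "#SC SRDFA,LCL(%s,%s),%s" % (cuu, ragroup, action)

print(sc_srdfa_command("07FC", "16", "TRANSMIT_IDLE,ON"))
# #SC SRDFA,LCL(07FC,16),TRANSMIT_IDLE,ON
```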

Refer to Figure 31 and Figure 32.

08.38.19 S0041609 EMCMN00I SRDF-HC : (4) ¢¢SQ SRDFA,LCL(07FC,16)
EMCQR00I SRDF-HC DISPLAY FOR (4) ¢¢SQ SRDFA,LCL(07FC,16) 491
MY SERIAL #  MY MICROCODE
------------ ------------
000000006134 5671-59
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
16     Y   F  36     000000006143 5671-59      G(R1>R2) SRDFA A MSC
KCH16AC    DYNAMIC AUTO-LINKS-RECOVERY    LINKS-DOMINO:NO  (MSCKCH )
------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 15,451     MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y )            TOLERANCE ( N )
CAPTURE CYCLE SIZE 0                  TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 22                 AVERAGE CYCLE SIZE 0
TIME SINCE LAST CYCLE SWITCH 12       DURATION OF LAST CYCLE 15
MAX THROTTLE TIME 0                   MAX CACHE PERCENTAGE 94
HA WRITES 3,132,696                   RPTD HA WRITES 4,930
HA DUP. SLOTS 397                     SECONDARY DELAY 27
LAST CYCLE SIZE 0                     DROP PRIORITY 33
CLEANUP RUNNING ( N )                 MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( Y )             MSC ACTIVE ( Y )
ACTIVE SINCE 10/24/2006 08:34:11
CAPTURE TAG C0000000 00000011         TRANSMIT TAG C0000000 00000010
GLOBAL CONSISTENCY ( Y )              STAR RECOVERY AVAILABLE ( N )
------------------------------------------------------------------
END OF DISPLAY

Figure 31 SQ SRDF/A display (primary side) showing Transmit Idle is ON

EMCMN00I SRDF-HC : (6) ¢¢SQ SRDFA,LCL(07FC,16)
EMCQR00I SRDF-HC DISPLAY FOR (6) ¢¢SQ SRDFA,LCL(07FC,16) 320
MY SERIAL #  MY MICROCODE
------------ ------------
000000006134 5671-59
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
16     N   ?  ?      000000006143                       SRDFA T MSC
----------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 15,804     MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y )            TOLERANCE ( N )
CAPTURE CYCLE SIZE 0                  TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 15                 AVERAGE CYCLE SIZE 0
TIME SINCE LAST CYCLE SWITCH 14       DURATION OF LAST CYCLE 15
MAX THROTTLE TIME 0                   MAX CACHE PERCENTAGE 94
HA WRITES 3,132,696                   RPTD HA WRITES 4,930
HA DUP. SLOTS 397                     SECONDARY DELAY 29
LAST CYCLE SIZE 0                     DROP PRIORITY 33
CLEANUP RUNNING ( N )                 MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( Y )             MSC ACTIVE ( Y )
ACTIVE SINCE 10/24/2006 08:34:11
CAPTURE TAG C0000000 00000172         TRANSMIT TAG C0000000 00000171
GLOBAL CONSISTENCY ( Y )              STAR RECOVERY AVAILABLE ( N )
------------------------------------------------------------------
END OF DISPLAY

Figure 32 SQ SRDF/A display (primary side) showing Transmit Idle is ACTIVE

Possible values for the Transmit Idle field in the SQ SRDFA display are:

◆ SRDFA T IDLE (SRDF/A single session mode in Transmit Idle)

◆ SRDFA T MSC (SRDF/A MSC mode in Transmit Idle)

◆ SRDFA T STAR (SRDF/Star mode in Transmit Idle)


SRDF/A Reserve Capacity enhancement: Delta Set Extension

This section discusses Delta Set Extension.

SRDF/A Delta Set Extension overview

Enginuity 5772 provides an additional option for managing the buffering of delta set data: SRDF/A Delta Set Extension (DSE). DSE provides a mechanism for augmenting the cache-based delta set buffering mechanism of SRDF/A with a disk-based buffering ability. This extended delta set buffering ability allows SRDF/A to ride through larger or longer SRDF/A throughput imbalances than are possible with cache-based delta set buffering alone.

There are a number of advantages to having SRDF/A ride through the conditions described above without dropping:

◆ Lower demands on remote link bandwidth — If an SRDF/A session drops, the session must be resynchronized for protection operations to resume. The resynchronization process is driven by an invalid track table that specifies which full tracks need to be sent across the links; this process may be subject to some amount of data inflation since only some of the data on a track may have changed. The amount of this inflation depends on the host-write block size and the degree of locality of reference in the workload.

◆ Lower RPO — If an SRDF/A session incurs a link outage, the time required for the next cycle switch is longer than if the link had remained active. The cycle elongates because there is some amount of time required to initiate and perform the resynchronization and, as noted above, the resynchronization may inflate the amount of data that must be sent over the links.

◆ Operational simplicity — SRDF/A’s ability to remain active eliminates the operational process normally required to return a link outage session to its online state.

DSE can be configured for any SRDF/A session, and within any configuration in which SRDF/A is a participant, including SRDF/Star, and Concurrent SRDF in which one remote mirror is participating in an SRDF/A group. DSE is designed to preserve the major benefits of SRDF/A such as: the negligible impact on host-write response time, the use of write folding to reduce remote link bandwidth requirements, and the robust options for managing consistency.


DSE theory of operation

The DSE enhancement uses disk-based buffering to augment the cache-based delta set buffering provided by SRDF/A. Even when DSE is enabled for a given SRDF/A session (which can be configured on a per-session basis), the system still attempts to maintain the delta set data for that session in cache as long as sufficient cache is available. When the number of cache slots used by the given session reaches a designated level, DSE transfers delta set data from the cache to delta set save devices on disk. From that point, DSE continues to offload delta set data from cache to disk in an effort to reduce the number of slots used by the delta sets for that session to the predesignated level.

Delta set data that has been paged out to disk is eventually brought back into cache using bulk transfers if Enginuity decides there is a sufficient number of available cache slots. Delta set data is also brought back into cache if Enginuity detects that “stalling” is imminent. The stalling state is undesirable because there is no flow of data across the remote links and no data marked pending for destage to the secondary devices. Since only delta set data that is resident in cache can be sent over the remote links, Enginuity pages just enough data into cache to prevent the stall.

Delta set save devices and save pools

The DSE paging operations (page-ins and page-outs) transfer delta set data between the cache and the delta set save devices. The delta set save devices eligible for use by an SRDF/A session are those that belong to a delta set save pool associated with that session. A delta set save pool is a collection of delta set save devices of a particular emulation type. The emulation type of the delta set data to be paged out, either FBA or CKD, must match that of the destination delta set save device.

Delta set save devices and delta set save pools have the following properties:

◆ A delta set save device must be created through a Symmetrix system configuration change. These Symmetrix special “SAVDEV” devices do not have channel addressability and hence cannot be addressed by any attached host.

◆ A delta set save device can belong to only one delta set save pool at any given time.


◆ Delta set save devices cannot be accessed by hosts or used as Snap save devices. While configured as a delta set save device, the device can be used only by DSE.

◆ A delta set save device can use RAID 1, RAID 5 (3+1) or RAID 5 (7+1) protection. (RAID 10 and metadevices are not permitted; RAID 6 is not recommended.)

◆ Each SRDF/A session can be associated with zero or one delta set save pool of each type (FBA, 3390).

◆ Multiple SRDF/A sessions can use the same delta set save pool.

Typically, the delta set save pools associated with an SRDF/A session using DSE are configured prior to activating DSE. However, it is possible to add or remove delta set save devices from delta set save pools that are currently used by DSE.

Before a given delta set save device (in use by DSE) is removed from a pool, there must be no paged out data present on the device. This is accomplished by draining the device, which entails copying the delta set data on that device to other devices in the pool. Once the draining operation completes and there is no paged delta set data on the device, it can be removed from the pool. Alternatively, a delta set save device can be deactivated, in which case DSE no longer pages out any data to the device and the system waits until normal page-in operations free the device of all paged delta set data before the device is allowed to be removed.
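The draining step can be pictured as redistributing a device's paged-out tracks across the remaining pool members before removal. A minimal sketch, with a pool modeled as a mapping from device name to paged-out track count (the data structure and even spread are assumptions for illustration, not the actual Enginuity mechanism):

```python
def drain_save_device(device, pool):
    """Copy paged delta set data off `device` to other devices in the pool,
    then remove the device from the pool.

    `pool` is an illustrative dict: device name -> paged-out track count.
    """
    others = [d for d in pool if d != device]
    if not others:
        raise RuntimeError("no other devices in the pool to drain to")
    tracks = pool[device]
    per_device = tracks // len(others)          # naive even redistribution
    remainder = tracks - per_device * len(others)
    for i, d in enumerate(others):
        pool[d] += per_device + (1 if i < remainder else 0)
    del pool[device]  # device now holds no paged data; remove it from the pool
    return pool

pool = {"SAV01": 300, "SAV02": 100, "SAV03": 0}
print(drain_save_device("SAV01", pool))  # {'SAV02': 250, 'SAV03': 150}
```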

DSE activation and deactivation

DSE can be enabled or activated for an SRDF/A session when a valid delta set save pool configuration has been established and the SRDF/A session is active. DSE can also be configured on a per-SRDF/A session basis so that it automatically activates once SRDF/A is activated. DSE can be manually deactivated at any time while SRDF/A is active.

DSE must be independently configured on both the primary and secondary subsystems. If DSE is invoked on any one subsystem to ride through an SRDF/A throughput imbalance, then both attached subsystems must be able to accommodate the resulting elongated cycles.

If an SRDF/A session has DSE activated and is forced to drop because of an unplanned error event, or because the SRDF/A session is manually dropped without the pending option, then all data belonging to that session in the capture, transmit, or receive delta sets is converted to invalid tracks. This occurs even if the corresponding delta set data is paged out at that time. To ensure write-order consistency, the subsystem guarantees that data in the apply delta set is destaged to the secondary devices.

Examples of unplanned errors that can cause an SRDF/A session, with DSE active, to be abruptly dropped are:

◆ The SRDF/A cache utilization level reaches the system limit (and host throttling is not enabled). As noted earlier, this is possible even with DSE enabled.

◆ Subsystem or site power loss exceeds one minute.

If a pend-drop is performed on an SRDF/A session that has DSE activated, the session waits until the end of the current cycle before deactivating. This is the same behavior as when DSE is inactive at the time the pend-drop is issued; however, additional page-in activity from disk may need to occur before the SRDF/A session deactivates.

Page-out threshold

DSE decides to page out delta set data for a given SRDF/A session if DSE is enabled for the session and the following condition is met:

The total number of cache slots used by all SRDF/A sessions in the system (whether the sessions are using DSE or not), when expressed as a percentage of the system write pending limit, exceeds the session page-out threshold of the session.
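Expressed as a predicate, the condition above compares the total SRDF/A cache usage, as a percentage of the system write pending limit, against the session's threshold. The following sketch is illustrative; the parameter names are assumptions, not Enginuity settings:

```python
def should_page_out(total_srdfa_slots, write_pending_limit, session_threshold_pct):
    """True if DSE should page out delta set data for a session.

    Per the condition above: the cache slots used by ALL SRDF/A sessions
    (with or without DSE), as a percentage of the system write pending
    limit, exceed the session's page-out threshold.
    """
    utilization_pct = 100.0 * total_srdfa_slots / write_pending_limit
    return utilization_pct > session_threshold_pct

print(should_page_out(60_000, 100_000, 50))  # True  (60% > 50%)
print(should_page_out(40_000, 100_000, 50))  # False (40% <= 50%)
```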

DSE continues paging out delta set data from the session as long as this condition persists. The time required to bring the total SRDF/A cache utilization level below the session page-out threshold depends largely on one or more of the following:

◆ The number of active SRDF/A sessions with DSE enabled.

◆ Delta set save pool throughput—the speed at which the delta set data can be written to the delta set save pool.

◆ The overall capacity of the delta set save pool.

If the SRDF/A cache utilization level increases despite the efforts of the page-out activity to lower it, it is possible that the page-out threshold of additional DSE-enabled SRDF/A sessions with higher page-out thresholds may also be exceeded. The result is an increase in the number of cache slots eligible to be paged out and a potential increase in the number of delta set save pools targeted by page-out operations.


The page-out threshold, like all other aspects of a DSE configuration, is configured separately on each of the primary and secondary subsystems. The potential therefore exists for the page-out threshold of some sessions to be exceeded on the primary subsystem but not on the secondary subsystem, or vice versa. However, if either side exceeds any of its page-out thresholds for a sufficiently long time, paging operations need to be invoked on both subsystems to handle the resulting cycle elongation.

When DSE decides to page out data, the potential exists for data from either of the two local SRDF/A delta sets (receive and transmit on the primary, and receive and apply on the secondary) on the subsystem in question to be paged out. Depending on the particular subsystem circumstances, DSE may apply a preference to page data from a particular delta set in the interests of optimizing performance and maximizing SRDF/A throughput.

The page-out threshold also influences the way that delta set data is subsequently paged in. If paged-out data exists for a session, the system performs bulk page-ins of the paged-out data belonging to that session if the following conditions are met:

◆ The page-out threshold for the session is not exceeded.

◆ There are sufficient cache slots to contain all of the delta set data for the session (including all of the paged-out data belonging to the session) without causing the page-out threshold for that session to be exceeded.

Note: DSE does not necessarily page in all of the paged-out data before suspending bulk page-in activity; this is just the criterion for deciding whether to initiate or continue bulk page-ins. If multiple sessions have paged out data, this criterion can be satisfied by both sessions even if there is not enough room in cache for all of the delta set data for both sessions at the same time.

If the conditions to perform bulk page-in are not met, it is still possible for DSE to page in data, but the amount is limited to the minimum needed to prevent a shortage of delta set data in cache from stalling the primary (R1) subsystem's transfer of data over the remote links, or the secondary (R2) subsystem's writes to the secondary volumes. Bulk page-ins are also used to prevent stalling.
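The page-in rules above can be sketched as a three-way decision. The structure below is illustrative (the threshold is expressed in slots for simplicity, and the names are assumptions):

```python
def page_in_mode(session_cached_slots, session_paged_slots,
                 threshold_slots, stall_imminent):
    """Decide the DSE page-in behavior for one session.

    Bulk page-in requires that the session's page-out threshold (here in
    slots) is not currently exceeded AND that cache could hold all of the
    session's delta set data, paged-out portion included, without
    exceeding that threshold. Otherwise only the minimum needed to avoid
    a stall is paged in.
    """
    threshold_ok = session_cached_slots <= threshold_slots
    would_fit = session_cached_slots + session_paged_slots <= threshold_slots
    if threshold_ok and would_fit:
        return "BULK_PAGE_IN"
    if stall_imminent:
        return "ANTI_STALL_PAGE_IN"   # just enough to keep links/destage busy
    return "NO_PAGE_IN"

print(page_in_mode(1_000, 500, 2_000, False))  # BULK_PAGE_IN
print(page_in_mode(1_900, 500, 2_000, True))   # ANTI_STALL_PAGE_IN
```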


When setting the page-out threshold, note the recommendation that SRDF/A should have enough cache available to avoid paging under normal circumstances (that is, in a nondegraded configuration). The higher the threshold, the less physical cache is required; however, a configuration with a higher page-out threshold is more vulnerable to write bursts that arrive faster than DSE can page data out.

Note: The page-out threshold can be set on a per-session basis; however, it is recommended that all sessions within a given subsystem use the same value.

DSE cache overhead

DSE maintains metadata in cache to keep track of data that has been paged out. For each track that DSE pages out, 1/512 of a cache slot is consumed for metadata.
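At 1/512 of a cache slot per paged-out track, the metadata overhead is straightforward to estimate, for example:

```python
def dse_metadata_slots(paged_out_tracks):
    """Cache slots consumed by DSE metadata for the given paged-out tracks."""
    return paged_out_tracks / 512.0

print(dse_metadata_slots(512))               # 1.0  (512 tracks cost one full slot)
print(round(dse_metadata_slots(1_000_000)))  # 1953 (about 1,953 slots for a million tracks)
```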

Cycle switching with DSE enabled

SRDF/A manages cycle switching in the same manner, regardless of whether DSE is active or not. Cycle switching occurs when all three of the following conditions are met:

◆ The time since the last cycle switch must be greater than or equal to the configured minimum cycle time.

◆ The transmission of data from the transmit delta set must have completed.

◆ The data in the apply delta set must be either destaged to the secondary volumes, or marked as write-pending to the secondary volumes.

When DSE is in use, data from any of the four SRDF/A delta sets can be paged out at any given time. For a cycle switch to occur, there must be no paged data in the transmit or apply delta sets, since no data may remain in those cycles at the switch. In contrast, a cycle switch can occur while there is paged data in the capture or receive delta sets (or both); no change to this paged data on disk is required in connection with the cycle switch.
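The three base conditions and the DSE paged-data constraint combine into a single readiness check. This sketch is illustrative; the parameter names are assumptions, not Enginuity internals:

```python
def can_switch_cycle(seconds_since_switch, min_cycle_time,
                     transmit_done, apply_committed,
                     paged_transmit_tracks, paged_apply_tracks):
    """True when SRDF/A may switch cycles, per the conditions above.

    The first three conditions apply with or without DSE; with DSE there
    must additionally be no paged-out data in the transmit or apply delta
    sets (paged data in capture or receive does not block the switch).
    """
    return (seconds_since_switch >= min_cycle_time
            and transmit_done               # transmit delta set fully sent
            and apply_committed             # apply data destaged or write-pending
            and paged_transmit_tracks == 0
            and paged_apply_tracks == 0)

print(can_switch_cycle(30, 30, True, True, 0, 0))  # True
print(can_switch_cycle(30, 30, True, True, 5, 0))  # False: paged transmit data
```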


DSE interactions with other features

DSE can be used for any SRDF/A session, and within any of the configurations in which SRDF/A is a participant, including SRDF/Star configurations, MSC configurations, and configurations involving local replicas attached to either the primary or secondary volumes or both. This section describes the use of DSE in conjunction with some of these other features.

Transmit Idle considerations

The use of DSE alone does not allow an SRDF/A session to ride through a temporary loss of all remote links that persists for longer than the recommended Link Limbo time of 10 seconds. To ride through a temporary loss of all links, Transmit Idle is required on both the primary and secondary subsystems. The combination of Transmit Idle and DSE allows SRDF/A to ride through complete remote link losses that last much longer than those that can be handled by cache alone.

Note: If host throttling is enabled and the remote links used by an SRDF/A session are lost, and DSE is not enabled for that session (but Transmit Idle is), there is the potential for indefinitely long write delays.

MSC considerations

Both the primary and secondary systems in an SRDF/A relationship participating in an MSC session must have DSE enabled to realize the resiliency benefit. If DSE is invoked on either side of any of the member SRDF/A sessions in the MSC session, both sides of all member SRDF/A sessions must be able to handle the potential cycle elongation.

The use of host throttling is currently not permitted when MSC is in use, whether or not DSE is configured.

Cache partitioning considerations

In Enginuity 5772 (the initial DSE release), DSE and cache partitioning cannot be used concurrently in the same system.


SRDF group considerations

Configurations with large numbers of sessions per delta set save pool are at risk of a fragmented save pool. Fragmentation reduces the sequentiality of accesses to the pool, which degrades performance and throughput to the pool. The default behavior is to drop SRDF/A immediately when this condition occurs.

An SRDF/A imbalance may occur between the incoming write workload and the outgoing SRDF/A bandwidth, or there may be an inability to destage data quickly enough at the secondary Symmetrix system. This results in the global memory in the primary Symmetrix system becoming full. The inactive and active cycles (capture and transmit delta sets) on the primary Symmetrix system consume all the available write memory.

In this situation, the behavior of SRDF/A obeys the user-configurable settings as follows:

◆ The primary Symmetrix system can throttle the host to match the speed of the links, and keeps SRDF/A active. The host’s performance becomes equivalent to the performance of synchronous mode.

◆ The primary Symmetrix system throttles the host for a user-defined period of time, and if the condition does not resolve itself at the expiration of that time, then the SRDF/A sessions are dropped.

◆ To avoid a memory full condition, SRDF/A must be properly designed and configured. Factors and variables that may cause an imbalance in SRDF/A include bandwidth, global memory, unbalanced primary and secondary Symmetrix system configurations, and workload allocation for specific implementations. Contact EMC Customer Support to initiate a study of your environment to avoid this imbalance.

To assist in better managing global memory full conditions, Enginuity 5x71 introduced both configurable cache utilization and reserved capacity.

Failback from secondary Symmetrix system devices

In the event of a disaster on the primary Symmetrix system, the data on the secondary Symmetrix system devices represents a dependent-write consistent image that can be used to restart an environment with controlled data loss. Once the primary Symmetrix system has been repaired, the process for returning to the primary Symmetrix system is the same as that used for synchronous SRDF failback operations.

Once the workload has been transferred back to the primary Symmetrix system and all tracks owed to the primary subsystem (from the secondary subsystem) have been sent, SRDF/A can be activated, and normal asynchronous mode protection can be resumed.

In the event of an extended failover, the SRDF/A configuration can be reversed using either Dynamic SRDF or a configuration change. SRDF/A can continue processing in the reversed direction until a planned reversal can be performed to restore the original SRDF/A primary/secondary relationship.

Note: Chapter 6, “Basic SRDF/A Operations,” contains more information on restarting the environment at the secondary site after failure at the primary site, while Chapter 7, “SRDF/A and SRDF/A MSC Return Home Procedures,” provides more information on returning to the primary Symmetrix system after the cause of the failure at that site has been repaired.


SRDF/A Multi-Session Consistency (MSC) modeMainframe software and Enginuity 5670.50 and later support SRDF/A control for multiple Symmetrix systems if there is a single SRDF group per Symmetrix system.

Beginning with Enginuity 5x71 for mainframe and open systems, SRDF/A supports configurations where there are multiple primary Symmetrix systems, and/or multiple primary SRDF sessions connected to multiple secondary Symmetrix systems or multiple secondary Symmetrix system SRDF groups. This technology is referred to as SRDF/A Multi-Session Consistency, or SRDF/A MSC. SRDF/A MSC configurations also support the Enterprise environment—mixed open and mainframe systems—whose data is controlled within the same SRDF/A MSC session.

Achieving data consistency across multiple SRDF/A sessions simply requires that the cycle switch process described earlier in this chapter be coordinated among the participating Symmetrix systems or SRDF sessions. The cycle switch occurs during a very brief period when no host writes are being serviced by the Symmetrix system. This requires a single coordination point from which the cycle switch process can be driven in all participating Symmetrix systems; this function is provided by the SRDF/A MSC host software.
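The coordination-point role can be sketched as an all-or-nothing switch driven from one place. The Session class below is a stand-in for a participating SRDF/A session, not the actual MSC host software structures.

```python
class Session:
    """Illustrative stand-in for one SRDF/A session in an MSC group."""
    def __init__(self, name):
        self.name = name
        self.cycle = 0
        self.tag = None

    def ready_to_switch(self):
        return True  # stand-in for the per-session readiness checks

    def switch(self, tag):
        self.cycle += 1
        self.tag = tag  # common tag stamped on the new capture cycle

def msc_cycle_switch(sessions, tag):
    """Switch all sessions together, or none of them."""
    if not all(s.ready_to_switch() for s in sessions):
        return False
    for s in sessions:
        s.switch(tag)
    return True

group = [Session("A"), Session("B"), Session("C")]
msc_cycle_switch(group, tag=0x11)
print([s.cycle for s in group])  # [1, 1, 1]
```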


SRDF/A MSC mode dependent-write consistency

From a single Symmetrix system perspective, I/O is processed exactly the same in SRDF/A MSC mode as in single session mode (as shown in Figure 33):

1. The active cycles on all primary Symmetrix systems contain the current host writes or the N data version in the capture delta set.

2. The inactive cycles contain the N-1 data version that will be transferred using SRDF/A from each primary Symmetrix system to its secondary Symmetrix system. The primary Symmetrix system inactive delta set is the transmit delta set and the secondary Symmetrix system’s inactive delta set is the receive delta set.

3. The active cycles on the secondary Symmetrix systems contain the N-2 data version of the apply delta set. This is the guaranteed dependent-write consistent image in the event of a disaster or failure.


Figure 33 SRDF/A MSC delta sets and their relationships

Entering SRDF/A Multi-Session Consistency

For the host to control the cycle switch process, the Symmetrix systems must be aware that they are running in MSC mode; this is done using the SRDF control software running on the host.

This host software performs the following:

◆ Coordination of the cycle switching for all SRDF/A sessions comprising the composite group enabled for consistency.

◆ Monitoring of the sessions for a failure to propagate data to the secondary Symmetrix system devices, and dropping all SRDF/A sessions together to maintain dependent-write consistency.

◆ Performing MSC cleanup when necessary.


Simply activating SRDF/A does not place a session in multi-session mode. Exiting multi-session mode does not drop or deactivate SRDF/A, it merely places SRDF/A in single-session mode. However, if SRDF/A is dropped or deactivated, then multi-session mode is terminated and needs to be re-entered when SRDF/A is reactivated.

As part of the process to enter MSC mode, and with each cycle switch issued thereafter, the host software assigns a cycle tag to each capture cycle that is retained throughout that cycle’s life. This cycle tag is a value that is common across all participating SRDF/A sessions and eliminates the need to synchronize the cycle numbers across the sessions. This cycle tag is the mechanism by which dependent-write consistency is assured.
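The tag mechanism can be sketched as a tiny allocator that hands the same value to every session at each coordinated switch. The names here are invented for illustration:

```python
import itertools

class TagAllocator:
    """Hands out a monotonically increasing cycle tag (invented sketch)."""
    def __init__(self):
        self._counter = itertools.count(1)

    def new_tag(self):
        return next(self._counter)

def tag_capture_cycles(session_names, allocator):
    """At each coordinated switch, every session's capture cycle receives
    the same tag, so internal cycle numbers never need to be synchronized."""
    tag = allocator.new_tag()
    return {name: tag for name in session_names}

allocator = TagAllocator()
first = tag_capture_cycles(["grpA", "grpB", "grpC"], allocator)
second = tag_capture_cycles(["grpA", "grpB", "grpC"], allocator)
```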

Figure 34 updates the SRDF/A state diagram from single session mode to incorporate multi-session mode for SRDF/A.

Figure 34 SRDF/A MSC allowed state transitions


Performing an SRDF/A MSC consistent cycle switch

SRDF/A MSC mode performs a coordinated cycle switch during a very short window of time when there are no host writes occurring. This time period is referred to as an SRDF/A window.

When the host software discovers that all SRDF groups and Symmetrix systems are ready for a cycle switch, it issues a single command to each SRDF group that performs a cycle switch and opens the SRDF/A window. The SRDF/A window is implemented as a bit in the SRDF/A state table in global memory where the cycle number and tag are also stored.

The table is accessed by the host adapter to obtain the cycle number at the start of each write in single session mode. In multi-session mode, the host adapter also checks the SRDF/A window bit; if the bit is on (an open window), the adapter disconnects from the channel and begins polling the bit to see whether the host software has closed the window. While the window is open, any I/Os that start are disconnected and, as a result, no dependent-write I/Os are issued by any host to any devices in the SRDF/A MSC group.

The SRDF/A window remains open for each SRDF group and Symmetrix system until the last SRDF session and Symmetrix system in the multi-session group acknowledges to the host software that the switch and open command has been processed. At this point, the host software issues a close command for each SRDF session under MSC control. As a result, dependent-write consistency across the SRDF/A MSC session is created.

Note: Enginuity provides a fail-safe mechanism to ensure that the window will not remain open permanently due to a host or host software failure. Enginuity will close the window if the host software has not closed it within 15 seconds.
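The window bit and its fail-safe can be sketched as follows. The 15-second limit comes from the text; the class and method names are invented, and time is passed in rather than read from a clock:

```python
class SrdfaWindow:
    """Sketch of the SRDF/A window bit kept in the state table in
    global memory (illustrative reconstruction, not product code)."""
    FAILSAFE_SECONDS = 15

    def __init__(self):
        self.open = False
        self.opened_at = None

    def switch_and_open(self, now):
        self.open = True
        self.opened_at = now

    def close(self):
        self.open = False

    def write_may_proceed(self, now):
        """Checked by the host adapter at the start of each write I/O.
        While the window is open the write disconnects and polls; if the
        host software never closes the window, Enginuity's fail-safe does."""
        if self.open and now - self.opened_at >= self.FAILSAFE_SECONDS:
            self.close()  # fail-safe close after a host software failure
        return not self.open

window = SrdfaWindow()
window.switch_and_open(now=0)
during_window = window.write_may_proceed(now=1)    # write disconnects
after_failsafe = window.write_may_proceed(now=16)  # fail-safe has fired
```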

As part of this switch and open operation, the host software assigns a cycle tag value to the active cycle (capture delta set). (This cycle tag value is separate from the cycle number assigned internally by SRDF/A.) This cycle tag is carried by the SRDF/A process to the secondary Symmetrix system and is used by the host software at the recovery site. This ensures that only data from the same host cycle is applied to the secondary Symmetrix system devices in each SRDF group and Symmetrix system in the event of a disaster.


During this window, read I/Os complete normally to any devices that have not received a write. The SRDF/A window is an attribute of the SRDF/A session and is checked at the start of each I/O, at no additional overhead cost, because the host adapter already obtained the cycle number from global memory as part of SRDF/A’s normal operations.

SRDF/A MSC mode delta set switching

This next series of figures represents three SRDF/A single-session mode sessions combined to create a single SRDF/A MSC session. There are two primary Symmetrix systems, one with a single SRDF/A session and the other with two SRDF/A sessions. The two secondary Symmetrix systems have the same configuration as the source, yielding a balanced configuration.

This section examines how the delta set switching works for SRDF/A MSC mode. The following series of figures assumes that SRDF/A MSC has been activated and two cycle switches have occurred previously.

Note: When MSC is first enabled, it performs a 10-second cycle switch to coordinate all participating sessions. This avoids extending the cycle switch time for any single sessions that were near a cycle switch boundary.

Before a primary Symmetrix system cycle switch can occur, two things must be achieved:

1. The primary Symmetrix system transmit delta set must be empty.

2. The secondary Symmetrix system apply delta set must have completed marking the secondary devices write pending for the N-2 data.
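The two preconditions can be captured in a short helper. This is an illustrative sketch with invented names, not product code:

```python
def ready_to_switch(transmit_empty, apply_restore_complete):
    """A primary reports 'ready to switch' only when both conditions
    above hold for its session."""
    return transmit_empty and apply_restore_complete

def msc_group_can_switch(session_states):
    """The MSC host software starts the coordinated switch only when
    every participating session is ready. Each entry is a
    (transmit_empty, apply_restore_complete) pair."""
    return all(ready_to_switch(t, a) for t, a in session_states)

one_lagging = msc_group_can_switch([(True, True), (True, True), (True, False)])
all_ready = msc_group_can_switch([(True, True), (True, True), (True, True)])
```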

Figure 35 on page 152 displays the current host writes being collected by the capture delta sets on the primary Symmetrix systems. The primary Symmetrix systems transmit delta sets continue to send the data to the secondary Symmetrix system’s receive delta sets. The apply delta sets continue to restore or mark the data write pending to the secondary Symmetrix system’s R2 devices.


Figure 35 MSC capture delta set collects application writes


Figure 36 shows that SRDF transfer between the primary Symmetrix system transmit delta sets and the secondary Symmetrix system receive delta sets is complete.

Figure 36 MSC primary Symmetrix system transmit delta set cycle is emptied


The primary Symmetrix systems halt the SRDF transfer (Figure 37) and send a “transmit complete” message to the secondary Symmetrix systems. The secondary Symmetrix systems store the information used during cleanup (if SRDF/A MSC drops), and send an acknowledgement back to the primary Symmetrix systems.

Figure 37 MSC primary Symmetrix system halts the SRDF transfer


Figure 38 shows how the secondary Symmetrix system’s apply delta set completes the restore process by marking the data write pending to the R2 devices. When it is finished, the secondary Symmetrix systems send a “restore complete” message to the primary Symmetrix systems.

Figure 38 MSC secondary apply delta set restore complete

Once the primary Symmetrix systems receive the "restore complete" message from the secondary Symmetrix systems, the primaries respond to polls from the SRDF/A MSC host software with a "ready to switch" condition. The SRDF/A MSC host software initiates a primary Symmetrix system cycle switch once all of the participating SRDF sessions in the SRDF/A MSC configuration report a "ready to switch" state.


Figure 39 displays the primary Symmetrix system cycle switch between the capture and transmit delta set. The SRDF/A MSC host software coordinates the cycle switch. Writes are deferred long enough for the host software to coordinate the cycle switch across all SRDF sessions in the primary Symmetrix systems.

Figure 39 MSC primary Symmetrix system cycle switch/writes are deferred


Figure 40 shows that the primary Symmetrix systems reconnect for the disconnected I/O, and new capture delta sets accept the host writes. The transmit delta sets contain the N-1 copy of dependent-write consistent data.

Figure 40 Writes are released/new capture delta set accepts host writes


Figure 41 shows that the primary Symmetrix systems send a commit message to the secondary Symmetrix systems once the primary Symmetrix systems’ cycle switches occur. After receiving the commit message, the secondary Symmetrix systems perform cycle switches between the receive and apply delta sets.

Figure 41 MSC secondary Symmetrix system cycle switch


Figure 42 shows that the secondary Symmetrix systems now have new receive delta sets available.

Figure 42 MSC secondary new receive delta set is available


Figure 43 shows that the SRDF transfer process begins from the primary Symmetrix systems to the secondary Symmetrix systems.

Figure 43 MSC primary Symmetrix systems begin SRDF transfer


Figure 44 shows that the secondary Symmetrix systems also begin the apply delta set restore process and the cycle switch process, described earlier, starts again.

Figure 44 Secondary Symmetrix systems begin the apply delta set restore
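The full sequence walked through in Figures 35 through 44 can be condensed into a short sketch. The step list paraphrases the figure legends; the `run_switch` helper and its version bookkeeping are an illustrative reconstruction, not product code:

```python
# Ordered steps of one SRDF/A MSC cycle switch, paraphrased from the
# legends of Figures 35 through 44.
MSC_SWITCH_STEPS = [
    "capture DS collects application write I/O",
    "primary transmit DS empties; 'transmit complete' sent, transfer halted",
    "secondary completes apply DS restore; 'restore complete' sent back",
    "primary answers the next host poll with 'ready to switch'",
    "host issues switch/open: capture DS becomes transmit DS, I/O released",
    "primary sends commit; secondary switches receive DS to apply DS",
]

def run_switch(state):
    """Advance a toy per-session map of delta set -> data version
    through one full coordinated switch (invented helper)."""
    n = state["capture"]
    return {
        "capture": n + 1,  # new empty capture DS for fresh host writes
        "transmit": n,     # old capture becomes the transmit DS
        "receive": n,      # fills as the new transmit DS drains
        "apply": n - 1,    # old receive becomes the apply DS
    }

after = run_switch({"capture": 10, "transmit": 9, "receive": 9, "apply": 8})
```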


SRDF/A MSC session cleanup process

When SRDF/A MSC is deactivated, all participating sessions return to single session mode. When SRDF/A is dropped while in MSC mode, each primary Symmetrix system performs the same cleanup as in single session mode: it discards all I/O from both the transmit and capture delta sets and marks the corresponding tracks as owed to the secondary Symmetrix system.

The host software does not need to perform any special recovery on the primary Symmetrix system.

Enginuity at the secondary Symmetrix system completes the restore of its apply delta set automatically. For each SRDF session, Enginuity discards any receive delta sets that are not complete. If a session's receive delta set is complete, Enginuity marks it as "needing cleanup" in cache, pending a decision from the host software.

The SRDF/A MSC host software uses cycle tags during recovery of the receive delta sets on the secondary Symmetrix system. There are three different scenarios to be considered:

1. All receive delta sets on all secondary Symmetrix systems and SRDF sessions have the same tag and are marked as "needing cleanup." The designation "needing cleanup" is an Enginuity marking that states that the receive delta set is complete. It is the result of the secondary Symmetrix system receiving and acknowledging the "transmit complete" message in step 2 of the SRDF/A MSC cycle switch process.

In this case, the host software may choose to either commit or discard all of the receive delta sets. The default behavior is to commit all of the receive delta sets. This ensures that the most current dependent-write consistent data is written to the secondary Symmetrix system devices.

2. All receive delta sets on all Symmetrix systems have the same tag number, but at least one Symmetrix system or SRDF/A session does not have a receive delta set marked “needing cleanup.” The latter implies that Enginuity discarded an incomplete receive delta set.

In this case the SRDF/A MSC host software must discard all receive delta sets for this tag number, since the most current data is already on the secondary Symmetrix system devices through the apply delta sets. The data contained in the discarded receive delta sets is marked as tracks owed to the primary Symmetrix system devices.

3. Different cycle tags exist within the apply and receive delta sets.

In this case, the secondary Symmetrix systems can be divided into two groups. The first group has Symmetrix systems with apply delta set cycle tags that match the receive delta set cycle tags of the Symmetrix systems from the second group. In other words, Symmetrix systems from the first group received the commit message for a certain host cycle, while the Symmetrix systems from the second group did not. In this case, the receive cycles of the Symmetrix systems from the second group are complete and the host software must force their restore. At the same time, host software must discard the receive delta sets of the Symmetrix systems from the first group regardless of their completeness.
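The three cleanup scenarios reduce to a small decision function over the cycle tags. This is an illustrative reconstruction with invented names, not product code:

```python
def msc_cleanup_decision(receive_sets):
    """Choose a cleanup action for the receive delta sets after SRDF/A
    drops in MSC mode. Each entry is a (cycle_tag, needs_cleanup) pair
    for one session's receive delta set."""
    tags = {tag for tag, _ in receive_sets}
    if len(tags) > 1:
        # Scenario 3: some sessions already committed this host cycle.
        # Force-restore the lagging group's complete receive sets and
        # discard the leading group's receive sets.
        return "restore lagging group, discard leading group"
    if all(needs_cleanup for _, needs_cleanup in receive_sets):
        # Scenario 1: every receive DS is complete; the default is to
        # commit them all, keeping the most current consistent data.
        return "commit all"
    # Scenario 2: Enginuity discarded at least one incomplete receive
    # DS, so the whole cycle's receive data must be discarded.
    return "discard all"

scenario1 = msc_cleanup_decision([(7, True), (7, True), (7, True)])
scenario2 = msc_cleanup_decision([(7, True), (7, True), (7, False)])
scenario3 = msc_cleanup_decision([(7, True), (8, True)])
```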


Using TimeFinder to create a restartable copy

EMC's consistency technology can be used to create a dependent-write consistent copy of data which can be used for restart purposes. Some database vendors also allow that copy to be used for roll-forward recovery. This section discusses creating the restartable copy either locally (primary site) or remotely (secondary site) using TimeFinder.

Creating local restartable copies — primary site

When running SRDF/A, or SRDF/A MSC, the use of TimeFinder split or consistent activate does not change. The process still uses Enginuity Consistency Assist to defer the write I/Os while the split or activate is occurring. This creates a dependent-write consistent image of the data on the BCVs or clones.

TimeFinder consistent split using BCVs — primary site

A dependent-write consistent image can be captured on BCVs using a TimeFinder consistent split, following the standard procedure of performing a full establish of the BCVs to an SRDF/A or SRDF/A MSC session.

TimeFinder consistent split using clone emulation mode — primary site

When creating a dependent-write consistent image using clone emulation mode, the devices are defined in the Symmetrix system as BCVs; RAID-5 devices can also be used. Traditional TimeFinder/Mirror commands are used when working with these devices; however, the host software translates those commands to TimeFinder/Clone commands.

TimeFinder consistent activate using native clones — primary site

TimeFinder/Clone devices may be defined in the Symmetrix system as BCVs or standard devices.

When initiating a clone copy, it is recommended that pre-copy be performed to avoid the copy-on-first-write penalty. This process performs an initial copy of the devices prior to allowing the clones to be used independently of the standard device. Some environments are not affected by this penalty, as it is application specific.

At this point, a dependent-write consistent image can be captured on a set of clone devices at any time.


Create remote restartable copies — secondary site

When a dependent-write consistent image is needed at the secondary Symmetrix system of either an SRDF/A or SRDF/A MSC configuration, the process for TimeFinder splits and activates changes slightly. While the command structures do not change, the code understands that simply deferring the I/O is not enough to maintain dependent-write consistency.

When a remote consistent split or activate occurs while SRDF/A or SRDF/A MSC is active, consistency is ensured by suspending the delta set cycle switching while the set of BCVs or clones is split or activated from the dependent-write consistent apply delta set. The result is an N-2 image of the primary Symmetrix system data: if the cycle switches every 30 seconds, the image is 60 seconds older than the time the command was issued. The image is dependent-write consistent by the nature of SRDF/A or SRDF/A MSC.
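The age of the remote image follows directly from the N-2 relationship. A trivial helper (invented name) makes the arithmetic explicit:

```python
def remote_image_age_seconds(cycle_interval_seconds):
    """The apply delta set holds the N-2 version, so the consistent
    image captured at the secondary site trails the primary by roughly
    two cycle intervals (simple arithmetic from the text)."""
    return 2 * cycle_interval_seconds

age = remote_image_age_seconds(30)  # 30-second cycles -> ~60-second-old image
```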

TimeFinder consistent split using BCVs — secondary site

When using BCVs, follow the standard procedure of performing a full establish of the BCVs to the SRDF/A or SRDF/A MSC group. At this point, a dependent-write consistent image can be captured on the BCV copies at any time using a consistent split. Note that this process suspends SRDF/A or SRDF/A MSC delta set cycle switching while a copy of the apply delta set is created. This image is an N-2 image of the primary Symmetrix system at the time the command was issued.

TimeFinder split using clone emulation mode — secondary site

When using clone emulation mode, the devices are defined in the Symmetrix system as BCVs; RAID-5 devices can also be used. Traditional TimeFinder/Mirror commands are used when working with these devices, but the host software translates the commands to TimeFinder/Clone commands.

TimeFinder activate using native clones — secondary site

TimeFinder/Clone devices may be defined in the Symmetrix system as BCVs or standard devices.

When initiating a clone copy, it is recommended that pre-copy be performed to avoid the copy-on-first-write penalty. This process performs an initial copy of the devices prior to allowing the clones to be used independently of the standard device. Some environments are not affected by this penalty, as it is application specific. At this point, a dependent-write consistent image can be captured on the set of clone devices at any time.


Establishing and using SRDF/A with Cascaded SRDF

This section describes establishing and using SRDF/A with Cascaded SRDF.

Overview and introduction

Prior to Enginuity 5773, an SRDF device could be a primary device (R1 device) or a secondary device (R2 device); however, it could not serve both roles simultaneously. Cascaded SRDF is a three-site disaster recovery configuration in which data from a primary site is synchronously replicated to a secondary site, and then asynchronously replicated to a tertiary site.

Cascaded SRDF introduces a new SRDF device type, the R21 device. The R21 device assumes the dual roles of primary (R1) and secondary (R2) device types simultaneously: data received by this device as a secondary can automatically be transferred onward by this device as a primary (according to the supported modes).

A basic Cascaded SRDF configuration consists of a primary or workload site (site A) replicating to a secondary site (site B) and then replicating the same data to a tertiary site (site C).

Figure 45 Cascaded SRDF architecture

Figure 45 shows the secondary site B device, labeled R21. This device is the R2 mirror of the workload site A device and the R1 mirror of the tertiary site C device. Site A and site B have an RDF pair state; site B and site C have another. These two pair states are separate from each other, but each must be considered when performing a control operation on the other pair.

The key benefits of Cascaded SRDF are:

◆ Sites can be geographically dispersed.



◆ Cascaded SRDF can span multiple SRDF groups and Symmetrix system arrays.

◆ Faster recovery times at the tertiary site, enabled by the capability to continue replicating from the secondary site to the tertiary site if the primary site goes down.

◆ Helps customers achieve a less-than-2-hour RTO.

◆ Zero data loss is achievable up to the point of a primary site failure, or of a fault or disaster event at the secondary or tertiary site.

◆ Requires fewer BCV copies than SRDF/AR, and is tightly integrated with the TimeFinder product family.

◆ Management capability is provided through the current storage management portfolio of products (SMC and EMC Ionix™ ControlCenter®); no additional management software purchase is required.

Revised SRDF relationships for Cascaded SRDF

The introduction of a Cascaded SRDF relationship means that the common practice of viewing an SRDF device as either an R1 device or an R2 device must change, since the R21 device serves both purposes. Viewing an SRDF device based on context, or mirror, rather than on device type makes the changes to Solutions Enabler easier to understand.

Throughout SRDF Host Component interfaces, an R21 device may be viewed based on the relationship that is being queried or controlled. For example, when working with the R1->R21 relationship, the R21 device acts and is managed as if it were an R2. When working with the R21->R2 relationship, the R21 acts and is managed as if it were an R1 device.

When querying or controlling an R1 device that is participating in a Cascaded SRDF relationship, the terms first hop and second hop are used for the R1->R21 and R21->R2 relationships, respectively. This is also true when controlling an R2 device that is participating in a Cascaded SRDF relationship, but there the first hop represents the R2->R21 relationship and the second hop represents the R21->R1 relationship.


When performing control operations against one pair relationship, the state of the other pair relationship determines whether the operation is allowed. Controls are allowed from hosts connected to the Symmetrix system containing the R1 device, the Symmetrix system containing the R21 device, or the Symmetrix system containing the R2 device.

In Figure 46, the location of the tertiary site C devices depends on the location of the controlling host. The controlling host is located at workload site A; therefore, a control operation using an RMT(GK,RDFGRP1.RDFGRP2) reference acts on the devices in the Symmetrix system at tertiary site C.

Figure 46 Query or control references for hop-2 devices are based on the workload location

In this example, the controlling request has been initiated at workload site A; therefore, a control operation using an LCL(GK,RDFGRP1) reference acts on the local devices at workload site A, an RMT(GK,RDFGRP1) reference acts on the devices at secondary site B, and an RMT(GK,RDFGRP1.RDFGRP2) reference acts on the devices in the Symmetrix system at tertiary site C.
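The reference-to-site mapping above can be sketched as a small lookup. This is an illustrative Python model only, not part of any EMC software; the group names are the hypothetical RDFGRP1/RDFGRP2 values used in the example:

```python
# Illustrative sketch (not an EMC API): resolving SRDF Host Component
# LCL/RMT device references relative to the controlling host at site A.
def resolve_reference(ref: str, groups: tuple) -> str:
    """Map an LCL/RMT reference issued at the workload site to a site.

    ref    -- "LCL" or "RMT"
    groups -- SRDF group path, e.g. ("RDFGRP1",) for one hop or
              ("RDFGRP1", "RDFGRP2") for hop 2 (dot-separated in syntax).
    """
    if ref == "LCL":
        return "workload site A"        # local devices, no hop taken
    if ref == "RMT" and len(groups) == 1:
        return "secondary site B"       # one hop over the first group
    if ref == "RMT" and len(groups) == 2:
        return "tertiary site C"        # two hops: RDFGRP1.RDFGRP2
    raise ValueError("unsupported reference")

# Mirrors the three references described in the text:
assert resolve_reference("LCL", ("RDFGRP1",)) == "workload site A"
assert resolve_reference("RMT", ("RDFGRP1",)) == "secondary site B"
assert resolve_reference("RMT", ("RDFGRP1", "RDFGRP2")) == "tertiary site C"
```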

Supported SRDF modes and general restrictions

This section lists supported SRDF modes and general restrictions.

Valid SRDF modes

SRDF currently supports the following modes of operation in a Cascaded SRDF environment:

◆ Synchronous mode (SRDF/S) - provides real-time mirroring of data between the source Symmetrix system(s) and the target Symmetrix system(s). Data is written simultaneously to the cache of both systems in real time before the application I/O is completed, thus ensuring the highest possible data availability. Data must be successfully stored in both the local and remote Symmetrix systems before an acknowledgment is sent to the local host. This mode is used mainly for metropolitan area network distances of less than 200 km.

◆ Asynchronous mode (SRDF/A) - maintains a dependent-write consistent copy of data at all times across any distance with no host application impact. Applications needing to replicate data across long distances historically have had limited options. SRDF/A delivers high-performance, extended-distance replication and reduced telecommunication costs while leveraging existing management capabilities with no host performance impact.

◆ Adaptive copy mode - transfers data from source devices to target devices regardless of order or consistency, and without host performance impact. This is especially useful when transferring large amounts of data during data center migrations, consolidations, and in data mobility environments.

Similar to other advanced SRDF relationships, the modes supported for each hop of a Cascaded SRDF configuration are based upon the current state of the device in question and SRDF links.

A basic Cascaded SRDF configuration consists of a primary or workload site (site A) replicating synchronously to a secondary site (site B) with SRDF/S, and then replicating the same data asynchronously to a tertiary site (site C) with SRDF/A.

Figure 47 Basic Cascaded SRDF configuration

In Figure 47, the link from workload site A to secondary site B is in SRDF/S (synchronous) mode and the link from secondary site B to tertiary site C is in SRDF/A (asynchronous) mode. This configuration represents a typical, best-practice implementation of Cascaded SRDF; however, other SRDF mode combinations are also valid, as shown in Figure 48 and listed in Table 5.

Figure 48 Cascaded SRDF mode combination diagram

Use of Adaptive Copy mode on the first leg will cause loss of consistency for SRDF/A operating on the second leg.

Table 5 Valid Cascaded SRDF mode combinations

  Hop-1: Site A to site B (R1->R21)    Hop-2: Site B to site C (R21->R2)
  Synchronous (a)                      Asynchronous (b)
  Adaptive Copy Disk                   Asynchronous
  Adaptive Copy WP                     Asynchronous
  Synchronous                          Adaptive Copy Disk
  Asynchronous                         Adaptive Copy Disk
  Adaptive Copy WP                     Adaptive Copy Disk
  Adaptive Copy Disk                   Adaptive Copy Disk

  a. Recommended Hop-1 SRDF mode of operation
  b. Recommended Hop-2 SRDF mode of operation



Limitations and restrictions

Consider the following limitations and restrictions.

General (non-interface specific) limitations

The following is a current list of general Cascaded SRDF limitations and constraints:

◆ The secondary site B (where the R21 device resides) requires Enginuity 5773, Solutions Enabler 6.5 or SRDF Host Component 5.6. The primary/workload site A and tertiary site C systems can run on 5671, 5772, or 5773.

◆ Workload site A and tertiary site C Symmetrix systems will require Enginuity 5x71 or greater to support SRDF/A Multi-Session Consistency (MSC).

◆ R21 device cannot be paired with another R21 device.

◆ R21 devices cannot be BCV devices.

◆ R21 devices are only supported on GigE and Fibre RAs.

◆ PPRC devices cannot be R21 devices.

◆ R21 thin devices are not supported.

◆ The first hop will support all SRDF modes of operation, with the exception of SRDF/A if it is currently utilized on the second hop.

◆ The second hop will support either SRDF/A or Adaptive Copy Disk Mode, with the exception of SRDF/A if it is currently utilized on the first hop.

◆ SRDF Host Component will continue to discover Symmetrix systems that are at most two hops away.

◆ There is no support for controlling or creating a single SRDF relationship containing both Concurrent and Cascaded components.
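A pre-flight check against these constraints might look like the following sketch. The dictionary field names are invented for illustration and do not correspond to any EMC API:

```python
# Sketch: pre-flight checks for an R21 candidate device, encoding the
# general Cascaded SRDF limitations above. Field names are illustrative.
def r21_config_errors(dev: dict) -> list:
    errors = []
    if dev.get("enginuity", 0) < 5773:
        errors.append("R21 site requires Enginuity 5773")
    if dev.get("paired_with") == "R21":
        errors.append("R21 cannot be paired with another R21")
    if dev.get("is_bcv"):
        errors.append("R21 cannot be a BCV device")
    if dev.get("ra_type") not in ("GigE", "Fibre"):
        errors.append("R21 supported only on GigE and Fibre RAs")
    if dev.get("is_pprc"):
        errors.append("PPRC devices cannot be R21 devices")
    if dev.get("is_thin"):
        errors.append("R21 thin devices are not supported")
    return errors

# A conforming device passes; a BCV candidate is rejected.
assert r21_config_errors({"enginuity": 5773, "ra_type": "GigE"}) == []
assert "R21 cannot be a BCV device" in r21_config_errors(
    {"enginuity": 5773, "ra_type": "Fibre", "is_bcv": True})
```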

z/OS specific limitations

The following restrictions and limitations apply to R21 devices:

◆ R21 devices are supported only at the 5773 level.

◆ Workload site A and tertiary site C systems can be at older release levels.

◆ R21 devices are supported only on GigE and Fibre RAs (no ESCON).


◆ An R21 device can be paired only with an R1 and an R2.

◆ You cannot chain R21 devices (R1->R21->R21->R2 is not allowed).

◆ R21 cannot be a BCV device.

◆ R21 cannot be a PPRC or XRC device.

◆ An R21 device cannot have one static and one dynamic SRDF mirror; both must be static or both dynamic.

Initial best practices for Cascaded SRDF

As a new Enginuity feature, Cascaded SRDF best practices are being developed based on continued experience with the product in the lab as well as with customers in the field. This list of best practices is not definitive and will continue to evolve based on these experiences:

◆ A basic Cascaded SRDF configuration is recommended and should consist of a primary or workload site (site A) replicating synchronously to a secondary site (site B) with SRDF/S, and then replicating the same data asynchronously to a tertiary site (site C) with SRDF/A.

◆ The first hop of a Cascaded SRDF configuration should have Consistency Group (ConGroup) enabled with synchronous SRDF mode.

◆ SRDF/A MSC should be enabled on the secondary leg with a host controlling cycle switching from a Primary, Secondary or Tertiary site.

◆ The R21 should have local mirrors using Mirrored, RAID 5, or RAID 6 device protection. Standalone (unprotected) R21 devices are not recommended because drive failures can impact replication.

◆ Separate SRDF directors for the incoming and outgoing SRDF groups on the R21 are recommended.

◆ R21 device cannot have one static and one dynamic SRDF mirror. Configure either static or dynamic on both mirrors, not a mixture.

◆ The use of SRDF/A Reserve Capacity is fully supported and recommended with Cascaded SRDF.


◆ For the second hop, the R21 device should not be in a Consistency Group because SRDF synchronous mode is not supported on the second hop.

◆ A “gold copy” BCV device at the tertiary site C is recommended should re-synchronization become necessary following a network or link outage.

Changes to the Host Component z/OS interface

For additional details on the Cascaded SRDF interface for z/OS, refer to the EMC SRDF Host Component for z/OS Version 5.6 Product Guide available on EMC Powerlink.

Initial packaging and licensing

The initial release of Cascaded SRDF was packaged with Enginuity release 5773 and Solutions Enabler v6.5. It requires a license at each site in a Cascaded SRDF configuration. Other applicable SRDF family licenses for the Cascaded SRDF implementation also apply.

Host Component change summary

Throughout SRDF Host Component 5.6 and later versions, an R21 device may be viewed based on the relationship that is being queried or controlled. For example, when working with the R1->R21 (read as R1 to R21) relationship, the R21 device acts and is managed as if it were an R2. When working with the R21->R2 relationship, the R21 acts and is managed as if it were an R1 device.

Configuring a Cascaded SRDF configuration is a two-step process: 1) establish R1->R2 pairs between workload site A and secondary site B (or secondary site B and tertiary site C), and 2) establish R1->R21 pairs between workload site A and secondary site B (or R21->R2 pairs between secondary site B and tertiary site C).

Cascaded SRDF changes the way an SRDF volume is viewed to be context (mirror) based rather than device based. SRDF Host Component now supports setting up both Cascaded and Concurrent SRDF environments using the standard SRDF Host Component SC command syntax. The following example assumes that the synchronous SRDF relationship to the secondary DMX has already been established:

SC VOL,RMT(GK,localRDFGRP#,bunkerRDFGRP#),CREATEPAIR(ADCOPY-DISK),start-bunkersymdev#-end-bunkersymdev#,B-site-start-symdev#

There have been relatively few Host Component changes required to implement Cascaded SRDF; the SRDF commands needed to perform multiple hops were already in the syntax and were used for SRDF/AR. The only real change in syntax is for describing the SRDF/A MSC session, since it is now controlled remotely. The syntax has been changed as follows:

MSC INCLUDE SESSION=ccuu,(localrdfgrp,remoterdfgrp)

All other changes are done in Enginuity microcode to allow Concurrent SRDF for an R2 device and all associated operations that this entails.
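As a sketch, the revised MSC INCLUDE statement could be generated as follows. This is illustrative Python with example gatekeeper and group values, not an EMC utility:

```python
# Sketch: building the revised MSC INCLUDE statement for a remotely
# controlled SRDF/A MSC session in a Cascaded configuration. The layout
# follows the syntax shown in the text; the values are examples only.
def msc_include(ccuu: str, local_grp: str, remote_grp: str = None) -> str:
    if remote_grp is None:
        # Conventional, directly attached SRDF/A session
        return f"MSC INCLUDE SESSION={ccuu},({local_grp})"
    # Cascaded: the session is controlled through local_grp to remote_grp
    return f"MSC INCLUDE SESSION={ccuu},({local_grp},{remote_grp})"

assert msc_include("9E00", "04", "54") == "MSC INCLUDE SESSION=9E00,(04,54)"
assert msc_include("9E00", "04") == "MSC INCLUDE SESSION=9E00,(04)"
```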

Cascaded SRDF/Star support for z/OS

The following sections pertaining to the Cascaded SRDF/Star support for z/OS are included for introductory purposes only, and are not intended to replace EMC product specific documentation.

Note: For additional details on the Cascaded SRDF z/OS STAR interface, refer to the EMC SRDF Host Component for z/OS Version 5.6 Product Guide (P/N 300-000-163) available on Powerlink.

Cascaded SRDF/Star introduction

SRDF/Star is a data protection and failure recovery solution that covers three geographically dispersed data centers in a triangular topology. SRDF/Star configures its three sites to protect business data against a primary site failure or a regional disaster, using concurrent RDF capability to mirror the same production data synchronously to one remote site and asynchronously to another remote site.

◆ The workload site of the SRDF/Star topology is the primary data center where the production workload is running.

◆ The sync target site is a secondary site usually located in the same region as the workload site. The production data is mirrored to this site using synchronous replication.

◆ The async target site is a secondary site in a distant location. The production data is mirrored to this site using asynchronous replication.


In the event of a workload site failure, there would be no data loss because of the synchronous replication to the regional site. In the event of a regional disruption that disabled both the workload site and the sync target site, SRDF/Star's concurrent SRDF setup ensures only minimal data loss because of the asynchronous replication to the more distant async target site.

A major benefit of using SRDF/Star for failure recovery is that you can quickly establish communication and protection between the two remote sites, either of which can become the new workload site. SRDF/Star allows you to incrementally establish an asynchronous session between the two remote sites, thus avoiding a full and time-consuming resynchronization to re-enable disaster recovery protection. Incremental resynchronization (replicating only the data differences between the synchronous and asynchronous sites) dramatically reduces the time required to establish remote mirroring and protection for a new workload site following a primary site failure.

Another SRDF/Star benefit is that it allows the coordination of consistency groups to the two remote sites, meaning that devices within a consistency group act in unison to preserve dependent-write consistency of a database that may be distributed across multiple SRDF systems. SRDF/Star also allows you to determine which remote target site (sync or async) has the most current data in the event of a rolling disaster that affects the workload site. With a rolling disaster, there is no guarantee that the sync site will be more current than the async site. The capability to display where the most current data is located helps determine which site’s data should be used for failure recovery.

A Cascaded SRDF/Star configuration has an SRDF/S (Synchronous) relationship between the workload site and the short distance target site, and an SRDF/A (Asynchronous) relationship between the short distance target site and the long distance target site.


Figure 49 illustrates, at a high level, a Cascaded SRDF/Star configuration under normal operation with the workload site at site A.

Figure 49 Cascaded SRDF/Star configuration under normal operation

(Figure 49 labels: Source (A) R1 -> SRDF/S -> Near city (B) R21 -> SRDF/A -> Far city (C) R2, with an SRDF/A recovery link between site A and site C.)


SRDF/A with SRDF/Extended Distance Protection

Previously, EMC introduced a new replication capability for Cascaded SRDF that supported a three-site disaster recovery configuration. The core benefit of a "cascaded" configuration is its inherent capability to continue replication, with minimal user intervention, from the secondary site to a tertiary site with SRDF/A in the event that the primary site goes down. This enables a faster recovery at the tertiary site, provided that is where the customer intends to restart the operation.

Available with Enginuity 5874, SRDF/Extended Distance Protection (SRDF/EDP) is a new two-site disaster restart solution that enables customers to achieve no data loss at an out-of-region site at a lower cost. It uses Cascaded SRDF as its building block, combined with the new diskless R21 data device at an intermediate (pass-through) site Symmetrix system, to pass data through to the out-of-region site using SRDF/A.

As with cascaded SRDF, an SRDF/EDP configuration consists of a primary site (site A) replicating synchronously to a secondary site (site B) with SRDF/S, and then replicating the same data asynchronously to a tertiary site (site C) with SRDF/A.

A standard R21 device has its own local mirrors, so there are three full copies of data, one at each of the three sites. In contrast, the diskless R21 device has no local disk space allocated to store the user data, which reduces the cost of disk storage in the secondary (R21) Symmetrix system.

The purpose of a diskless R21 device is to cascade data to the R2 device. When using a diskless R21 device, the changed tracks received on the R2 mirror are saved in cache until those tracks are sent to the R2 device. Once the data is sent to the R2 device and its receipt is acknowledged, the cache slot is freed and the data no longer exists on the R21 Symmetrix system.
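The pass-through lifecycle can be illustrated with a toy model: a track occupies a cache slot on arrival, and the slot is freed once the track has been transmitted toward the R2 and acknowledged. This is a conceptual sketch only, not a representation of actual Enginuity internals:

```python
# Toy model of diskless-R21 pass-through behavior: changed tracks are
# held in cache only (no local disk) and are freed once the tertiary
# (R2) side acknowledges them.
from collections import deque

class DisklessR21:
    def __init__(self):
        self.cache = deque()        # changed tracks held in cache slots

    def receive(self, track):
        # R2-mirror role: a changed track arrives from the R1 side
        self.cache.append(track)

    def transmit_one(self):
        # R1-mirror role: send the oldest track onward to the R2 device;
        # the cache slot is freed on acknowledgment (modeled as popleft)
        return self.cache.popleft()

r21 = DisklessR21()
for t in ("t1", "t2"):
    r21.receive(t)
assert len(r21.cache) == 2          # tracks consume cache, not disk
assert r21.transmit_one() == "t1"   # FIFO pass-through toward site C
assert len(r21.cache) == 1          # slot freed after acknowledgment
```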

SRDF/EDP is for customers who are looking for a two-site DR solution with the ability to achieve a zero Recovery Point Objective (RPO) in the event of a primary site failure. To date, customers looking to establish a two-site disaster recovery configuration with a zero RPO were bound by distance limitations due to latency and application performance (synchronous-type replication). Also, if the business called for an extended-distance replication solution (asynchronous-type replication), they had to accept some level of data loss (seconds to minutes).

To achieve the best of both worlds, some customers had to opt for three-site configurations such as Concurrent SRDF or Cascaded SRDF with SRDF/Star to obtain extended-distance replication and zero RPO at the tertiary site, even though they did not need three copies of their data. SRDF/EDP supports an RPO between the zero RPO of SRDF/S and the seconds-to-minutes RPO of SRDF/A, offering a more cost-effective alternative to a three-site DR configuration.

Requirements and dependencies

SRDF/EDP and associated diskless devices are supported only on the Symmetrix VMAX hardware platforms with Enginuity 5874 and later. However, only the Symmetrix system that contains the diskless SRDF R21 device is required to be at Enginuity 5874. The primary and tertiary site systems are required to be at Enginuity code level 5773 or 5874. Customers are reminded that if a failover action will result in the primary or tertiary site being configured as a new "secondary" site (where diskless R21s are configured), then Enginuity 5874 will be required on those sites as well. See "Current limitations and restrictions" for additional information.

Current limitations and restrictions

◆ The secondary site (where the R21 device resides) requires Enginuity 5874 and SRDF Host Component v5.7.

◆ The primary and tertiary site systems can run on Enginuity 5773 or Enginuity 5874.

◆ Sites A and C Symmetrix will require Enginuity 5x73 or greater to support SRDF/A MSC.

◆ R21 devices cannot be BCV devices.

◆ R21 devices are supported only on GigE and Fibre RAs.

◆ PPRC devices cannot be R21 devices.

◆ XRC devices cannot be R21 devices.

◆ R21 thin devices are not supported.


◆ The first hop will support all SRDF modes of operation, with the exception of SRDF/A if it is currently utilized on the second hop.

◆ The second hop will support either SRDF/A or Adaptive Copy Write Pending Mode, with the exception of SRDF/A if it is currently utilized on the first hop. The second hop will not support SRDF/S.

◆ No mix of dynamic and static SRDF relationships for SRDF/EDP devices.

◆ No mix of SRDF/EDP R21 devices and legacy Cascaded R21 devices.

◆ ResourcePak Base and SRDF Host Component will continue to discover Symmetrix arrays that are at most an additional two hops away.


Mainframe Enabler 7.0 (SRDF Host Component 7.0) changes

With Mainframe Enabler 7.0 (SRDF Host Component 7.0), configuring a Cascaded SRDF or SRDF/Extended Distance Protection (SRDF/EDP) configuration is now a one-step process: establish R1->R21->R2 triples between the primary, secondary, and tertiary sites. This is accomplished by utilizing the new composite commands available in SRDF Host Component 7.0. The following commands have been added:

The syntax for these new composite commands is as follows:

#SC VOL,LCL(cuu,rdfgroup#1,rdfgroup#2),CASCRE(flag-list),r1dv-r1dv,r21dv,r2dv

#SC VOL,LCL(cuu,rdfgroup#1),CASSWAP(flag-list),r1dv-r1dv
#SC VOL,LCL(cuu,rdfgroup#1),CASDEL(flag-list),r1dv-r1dv
#SC VOL,LCL(cuu,rdfgroup#1),CASSUSP,r1dv-r1dv
#SC VOL,LCL(cuu,rdfgroup#1),CASRSUM,r1dv-r1dv

For example, Host Component 7.0 now supports setting SRDF/EDP environments by using the new parameters for the SRDF Host Component SC command syntax. The following shows the syntax for creating the SRDF/EDP volume triplets needed to establish the SRDF/EDP environment.

SC VOL,LCL(cuu,rdfgroup#1,rdfgroup#2),CASCRE(flag-list),r1dv-r1dv,r21dv,r2dv

  Command      Description

  CASCRE       Creates a cascaded configuration
  CASSUSP (a)  Suspends pairs in a cascaded configuration - issued from R1 only
  CASRSUM (a)  Resumes pairs in a cascaded configuration - issued from R1 only
  CASDEL (b)   Terminates all relationships in a cascaded configuration
  CASSWAP (b)  Performs SRDF personality swap on both device pairs

  a. Must be used for diskless R21.
  b. Requires devices to be suspended first.


Other considerations for use of the new composite commands:

◆ Cannot be issued from secondary site B

◆ Default states for Cascaded SRDF are synchronous (A->B) and Adaptive Copy Disk (B->C)

◆ Default states for SRDF/EDP are synchronous (A->B) and Adaptive Copy Write Pending (B->C)

◆ Specifying ADCOPY_DISK or ADCOPY flags on CASCRE affects the A->B leg

◆ These modes are implicit for the B->C link

◆ Message terminology: 'Environment 1' = A->B, 'Environment 2' = B->C

There have been relatively few Host Component changes required to implement SRDF/EDP. The SRDF commands have been updated with new parameters to create, delete, resume, suspend, and swap. All other changes are done in microcode to allow Concurrent SRDF for an R2 device and all associated operations that this entails.
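The composite command strings above can be assembled mechanically. The following sketch is illustrative only: the device numbers and gatekeeper cuu are made up, and real usage should follow the product guide:

```python
# Sketch: composing the Mainframe Enabler 7.0 composite commands shown
# in this section. Gatekeeper cuu, group numbers, and device numbers
# are illustrative values, not from a real configuration.
def cascre(cuu, grp1, grp2, flags, r1_start, r1_end, r21, r2):
    # Creates the R1->R21->R2 triple in one step
    return (f"#SC VOL,LCL({cuu},{grp1},{grp2}),CASCRE({flags}),"
            f"{r1_start}-{r1_end},{r21},{r2}")

def cassusp(cuu, grp1, r1_start, r1_end):
    # Suspends pairs; must be issued from the R1 side, and is the
    # required form when a diskless R21 is involved
    return f"#SC VOL,LCL({cuu},{grp1}),CASSUSP,{r1_start}-{r1_end}"

cmd = cascre("9E00", "04", "54", "ADCOPY_DISK",
             "0100", "010F", "0200", "0300")
assert cmd.startswith("#SC VOL,LCL(9E00,04,54),CASCRE(ADCOPY_DISK)")
assert cassusp("9E00", "04", "0100", "010F") == \
    "#SC VOL,LCL(9E00,04),CASSUSP,0100-010F"
```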


Using SRDF/A write pacing

SRDF/A write pacing provides an additional method of extending the availability of SRDF/A by preventing conditions that result in Symmetrix cache exhaustion. Distinct from the existing mechanisms, SRDF/A write pacing is dynamic. The write pacing feature detects when SRDF/A I/O service rates are lower than host I/O rates and takes corrective action to slow host I/O rates to match the slower service rate. This includes detecting spikes in host write I/O rates and slowdowns in both transmit and R2-side restore rates. In this way, monitoring and throttling of host write I/O rates can control the amount of cache used by SRDF/A, which prevents cache exhaustion on both the primary (R1) and secondary (R2) sides.

The SRDF/A write pacing feature offers a group pacing option enabled for the entire SRDF/A group and a device pacing option enabled for an individual SRDF/A R1 volume whose R2 partner on the secondary system participates in TimeFinder operations. Both write pacing options are compatible with each other and with other SRDF/A features such as tunable cache utilization and Reserve Capacity. EMC host-based SRDF software allows users to enable/disable each write pacing option.

The following terms are used when referring to write pacing states:

◆ Enabling — The user has chosen to use the feature; it is invoked as necessary, as described under Arming and Pacing below.

◆ Arming — Enginuity has determined that it may need to pace and begins performing the calculations necessary to support write pacing. These calculations may result in a pacing delay of zero, in which case no pacing occurs.

◆ Pacing — Occurs when the group is armed and Enginuity determines that pacing is needed, that is, when the pacing calculations result in a nonzero pacing delay.

Users can enable/disable each write pacing option. Both write pacing options are compatible with each other and with other SRDF/A features such as Reserve Capacity and Multi-Session Consistency (MSC). The key benefit of SRDF/A write pacing is its self-paced mechanism. Once enabled, write pacing is employed only when needed and applies the amount of delay to the host write I/O response time required to keep the SRDF/A session running. With Enginuity version 5875, the write pacing feature can extend the host write I/O response time up to a maximum response delay value of 1 second, with a default delay of 50 milliseconds. Users can override the default with a user-specified value. The user-specified maximum response time then applies to both write pacing options.

Solutions Enabler and Mainframe Enabler 7.2 provide a new interface that allows users to configure and control the SRDF/A group pacing functionality available with Enginuity 5874 and enhanced with device-level pacing in Enginuity 5875. Refer to the Solutions Enabler or Mainframe Enabler 7.2 product guides for additional information regarding the use of these features.

SRDF/A group pacing

The group pacing option extends the host write I/O response time for a given SRDF/A group to balance the incoming host write I/O rate with the SRDF link bandwidth and throughput capabilities.

SRDF/A group pacing is useful when:

◆ The host I/O rate exceeds the SRDF link throughput

◆ Some SRDF links that belong to the SRDF/A group are lost

◆ Even though all links may remain up, they may still have reduced bandwidth

SRDF/A group pacing is based on the following user controls:

◆ Maximum Delay — maximum delay in microseconds that write pacing will add to the host response time

◆ Pacing Cache Threshold — % value used of the total number of cache slots available for SRDF/A use; default value is 60%

◆ Pacing DSE Log Pool Threshold — percentage of the total number of tracks in the DSE pool that must be in use before pacing is invoked; the default value is 90%

If group pacing is active, the host write I/Os are paced as follows:

◆ If users do not specify the maximum delay, the group pacing feature extends the host write I/O response time to match the speed of the SRDF links. By default, the group pacing feature cannot extend host response time by more than 50 milliseconds. The group pacing feature always applies the lowest possible pacing delay while trying to keep the SRDF/A session active. The SRDF/A session will drop if cache utilization conditions require a delay greater than 50 milliseconds.


◆ If users specify a maximum delay, the group pacing feature extends the host write I/O response time using the minimum possible value required to keep the SRDF/A session running, but no greater than the user-specified maximum delay. For example, if the user caps the maximum delay at 10 milliseconds and cache utilization conditions can be resolved using a pacing delay of 5 milliseconds, the group pacing feature applies the 5 millisecond pacing delay. However, if cache utilization conditions require a pacing delay greater than 10 milliseconds, the SRDF/A session may eventually drop if SRDF/A cache limits are exceeded.
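The delay-selection behavior described in these two cases can be sketched as a simple model. The function name, units, and the drop condition below are illustrative assumptions, not the actual Enginuity algorithm:

```python
# Simplified model of SRDF/A group pacing delay selection.
# All names and values are illustrative, not Enginuity internals.

DEFAULT_MAX_DELAY_MS = 50  # default cap on added host response time

def pacing_delay(required_delay_ms, user_max_delay_ms=None):
    """Return (applied_delay_ms, session_at_risk).

    required_delay_ms: minimum delay that current cache/link conditions
    demand to keep the SRDF/A session active.
    user_max_delay_ms: optional user-specified cap on the delay.
    """
    cap = user_max_delay_ms if user_max_delay_ms is not None else DEFAULT_MAX_DELAY_MS
    if required_delay_ms <= cap:
        # Always apply the lowest delay that keeps the session active.
        return required_delay_ms, False
    # Conditions demand more delay than the cap allows: the session
    # may eventually drop once SRDF/A cache limits are exceeded.
    return cap, True

# The example from the text: a 10 ms user cap with only 5 ms required.
print(pacing_delay(5, user_max_delay_ms=10))   # (5, False)
print(pacing_delay(15, user_max_delay_ms=10))  # (10, True)
```

The model captures the key design point: pacing never applies more delay than needed, and the cap (default or user-specified) is the point past which the session is at risk rather than a delay that is always applied.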

If group pacing and Transmit Idle are both enabled, their behavior depends on when all SRDF links are lost:

◆ If group pacing is active when all SRDF links are lost and, consequently, Transmit Idle is invoked, the host write I/Os continue to be paced.

◆ If all SRDF/A links are lost, Transmit Idle is invoked, and cache usage conditions require that the group pacing option be activated, the host write I/Os are not paced until one or more SRDF links recover.

SRDF/A device pacing

The device pacing technique delays the host write I/O response time for the individual SRDF/A R1 volumes whose R2 counterparts participate in TimeFinder copy sessions.

The device pacing option is designed to mitigate high cache utilization levels when an SRDF/A session is active and TimeFinder copy sessions run on the secondary Symmetrix system. TimeFinder copy sessions create additional copy requests for the SRDF/A R2 volumes that already service the SRDF/A session cycle requests.

Such conditions can lead to SRDF/A operational interruptions. By pacing only the host write I/Os to the individual R1 volumes whose R2 partners participate in TimeFinder operations, the device pacing feature indirectly paces the SRDF/A requests issued to the R2 volumes.

Like the group pacing option, the device pacing option extends the host write I/O response time on the primary Symmetrix system only when required.


The following must be true before the device pacing option is applied for an R1 volume on the primary Symmetrix system:

◆ TimeFinder copy sessions must be running on the R2 volumes on the secondary Symmetrix system.

If device pacing is active, the host write I/Os are paced as follows:

◆ If users do not specify the maximum delay, the device pacing feature extends the host write I/O response time for individual R1 volumes to balance the workload of their R2 partners handling the TimeFinder and the SRDF/A session requests. The device pacing feature always applies the minimum possible pacing delay to keep the SRDF/A session active. By default, this additional pacing delay cannot exceed 50 milliseconds.

◆ If users limit the host response time by setting a user-defined value, the device pacing feature applies the minimum possible response time delay to keep the SRDF/A session active up to the user-defined maximum delay. If the device pacing option does not help resolve the high levels of cache utilization, the SRDF/A session drops.

With Enginuity version 5875 and the device pacing option, users have full support for TimeFinder operations from the SRDF/A R2 volumes. If the device pacing option is enabled, users can run the following TimeFinder operations from the SRDF/A R2 volumes that participate in an active SRDF/A session:

◆ Full-device or extent-level TimeFinder/Clone sessions using any copy option, where only precopy options were previously available

◆ Regular or multi-virtual TimeFinder/Snap sessions

If SRDF/A device pacing and Transmit Idle are both enabled, their behavior depends on when all SRDF links are lost:

◆ If SRDF/A device pacing is active when all SRDF links are lost and, consequently, Transmit Idle is invoked, the host write I/Os continue to be paced.

◆ If all SRDF/A links are lost and Transmit Idle is invoked, the SRDF/A device pacing feature cannot be invoked. Only the SRDF/A group pacing feature may be invoked, if cache usage conditions require that host I/O writes be paced. Again, the SRDF/A group pacing mechanism will not start extending the host write I/O response time until one or more links recover.


4

Planning an SRDF/A or SRDF/A MSC Replication Installation

This chapter presents these topics:

◆ Introduction ................................................................................................ 188
◆ Functional comparison of common SRDF solutions ............................ 189
◆ Performance comparisons SRDF/S and SRDF/A host applications . 195
◆ Asynchronous replication: The major consideration............................ 197
◆ Peak time..................................................................................................... 200
◆ Locality of reference/write folding ......................................................... 204
◆ Link bandwidth.......................................................................................... 209
◆ Cache calculation ....................................................................................... 216
◆ Balancing SRDF/A configurations.......................................................... 218
◆ Network considerations............................................................................ 224
◆ Analysis tools.............................................................................................. 245
◆ EMC SRDF/A planning and design service ......................................... 248


Introduction

This chapter contains information regarding the core planning and design methodology to successfully implement an SRDF/A solution. This chapter includes detailed descriptions of the fundamental sizing parameters that must be considered in architecting and implementing SRDF/A configurations. Parameters that must be considered include: application workload, network bandwidth, and Symmetrix cache requirements.

This chapter also includes those items indirectly related to a successful SRDF/A implementation such as: network throughput considerations, Symmetrix performance planning, and other general Symmetrix configuration considerations. In addition, and as a precursor to the implementation-related chapters of this document, topics such as supporting data collection requirements, pertinent analysis toolsets, and service offerings are also covered.

This chapter is intended to describe SRDF/A configuration fundamentals. However, because of the complexity involved and the dependence upon specific analysis tools and best practices, and because EMC would like you to enjoy a successful implementation, it is not meant to be a stand-alone, step-by-step primer for customers. For this reason, it is recommended that you engage an EMC technical service offering for SRDF/A configuration implementation. The EMC replication planning and design service offering is described in detail at the end of the chapter. EMC field personnel may be consulted for additional details on obtaining this service offering.


Functional comparison of common SRDF solutions

For many existing configurations, SRDF/A may be the logical next step from SRDF/S (Synchronous) or SRDF/AR (Automated Replication) environments. Understanding the core similarities and differences between these environments and SRDF/A is extremely important. The information contained in this and ensuing sections will assist in understanding both the advantages and technical challenges inherent in architecting and maintaining a successful SRDF/A solution.

Examples of environments which could benefit from converting to SRDF/A include:

◆ Applications experiencing elongated response times as a result of using SRDF/S.

◆ Applications which require protection beyond the synchronous distance limitations currently imposed by SRDF/S.

◆ Existing SRDF/AR applications which may be migrated to SRDF/A in order to decrease their RPO from several hours to minutes or even seconds.

EMC replication best practices recommend using BCVs on the secondary volumes to provide a consistent image during bulk resynchronization of the secondary volumes after an outage.

In addition to these SRDF/A comparisons to SRDF/S and SRDF/AR configurations, other applicable comparisons may also be found throughout this chapter.

SRDF/S (Synchronous) mode functionality review

Today, many applications use SRDF/S (Synchronous) mode to protect data on the primary storage subsystem. If these applications are experiencing response time elongation for all or specific devices currently constrained by synchronous replication, they may be good candidates for SRDF/A.

SRDF/S is a business continuance solution that maintains a real-time (synchronous) copy of data at the logical volume level in Symmetrix 3xxx, 5xxx, 8xxx, or Symmetrix DMX systems in the same or separate locations.


Symmetrix SRDF/S offers the following major features and benefits:

◆ High data availability

◆ High performance

◆ Flexible configurations

◆ Host and application software transparency

◆ Automatic recovery from a component or link failure

◆ Significantly reduced recovery time after a disaster

◆ Increased integrity of recovery procedures

◆ Reduced backup and recovery costs

◆ Reduced disaster recovery complexity, planning, and testing

The SRDF/S operation is transparent to the host operating system and host applications, since it does not require additional host software for duplicating data on the participating Symmetrix systems.

SRDF/S offers greater flexibility through additional modes of operation, specifically:

◆ Semi-synchronous mode

◆ Adaptive copy write pending mode

◆ Adaptive copy disk mode

SRDF synchronous mode maintains a constant, consistent copy of data on the secondary storage subsystem. The tradeoffs are the potential for degraded write performance and the cost of the high-capacity network links. The points raised here do not lessen the importance of an SRDF synchronous mode (zero data loss) solution. They are provided here and throughout this chapter as points of comparison to better facilitate understanding the differences between the SRDF/S and SRDF/A products.

With SRDF/S, the writes from the host to the primary subsystem do not complete until the write is resident in global cache of the secondary Symmetrix system. This implies that the logical volume is busy with the host (including read-following-write operations) throughout the SRDF operation.

The SRDF/S on process flow is illustrated in Figure 50 on page 191.


Figure 50 SRDF/S on process flow

SRDF/AR (Automated Replication) functionality review

This section describes the two basic SRDF/AR solutions, and contrasts their designs with the SRDF/A solution. As stated earlier, existing SRDF/AR (Automated Replication) applications may be converted to SRDF/A for two reasons:

◆ To decrease their RPO from several hours to minutes, or even seconds.

◆ To alleviate the additional physical disk requirement imposed by the use of TimeFinder BCVs on the primary and secondary subsystems.

EMC SRDF/AR is an automation solution that uses both SRDF and TimeFinder to provide a periodic asynchronous replication of a restartable data image.

The single-hop SRDF/AR configuration shown in Figure 51 on page 192 allows the secondary devices to lag the primaries in a controlled manner (depending on the resulting cycle time and the RPO goals).

[Figure 50 detail — SRDF/S links:
1. I/O write received from host/server into source cache
2. I/O is transmitted to target cache
3. Receipt acknowledgement is provided by target back to cache of source
4. Ending status is presented to host/server]


Figure 51 SRDF/AR single hop replication

However, if a greater level of protection is required, the multi-hop SRDF/AR configuration illustrated in Figure 52 can provide long distance disaster restart with zero data loss. When compared with other traditional disaster recovery solutions, with their long recovery times and high data loss potential, disaster restart solutions using SRDF/AR provide remote restart with a very short restart time, and relatively low data lag.

Figure 52 SRDF/AR multi-hop replication

SRDF/AR offers data protection with dependent-write consistency over long distances at the cost of additional physical disks for the TimeFinder BCVs and RPOs measured on the order of many minutes to hours; this compares to minutes, or even seconds, with SRDF/A. SRDF/AR protection is accomplished by using geographically-separated replicas with specific hardware and software products necessary for automation purposes.

The SRDF/AR replication process is typically automated with TimeFinder/Mirror in a mainframe z/OS environment. These automation products coordinate the creation of replicas by using the EMC TimeFinder and SRDF products as necessary to ensure that a dependent-write consistent copy of data from the primary Symmetrix system is transferred to the secondary Symmetrix system.

The SRDF/AR BCVs assume dual personalities as follows:

◆ When established to their respective standard volumes, they become true mirrors of those standard volumes, and remain true mirrors until they are split (from their standard volumes).

◆ Once split from their respective standard volumes, they become SRDF primary volumes and are able to participate in an SRDF relationship.

Once these BCV volumes are split and the SRDF links are available, these combination BCV/SRDF volumes are used to incrementally transfer all updated data to the secondary site. Because this is an incremental copy, only changed (updated) data since the last cycle is transferred.

Difference between synchronous and asynchronous

There is a fundamental difference between synchronous and asynchronous replication protection. In synchronous replication described previously in “SRDF/S (Synchronous) mode functionality review” on page 189, any bottleneck or slowdown in any of the components comprising the system impacts the time required to replicate current and future I/Os. This impact is often manifested through an accompanying increase in application response time. Slowdowns or bottlenecks can occur on the network links, the secondary site disk devices, the disk adapters (DAs) or the remote adapters (RAs). Despite the slowed pace of I/Os and the accompanying increases in application response times, SRDF in synchronous mode (SRDF/S) does not drop or halt replication.

In contrast, during asynchronous replication, whenever similar slowdown conditions occur, there is no impact on the application response time. In some cases, however, the slowdown phenomenon may cause other complications. One such complication arises when the SRDF network links can no longer keep up with the volume of host writes; a phenomenon referred to as link saturation. Another complication arises when the secondary site cannot accept the volume of writes coming off the SRDF link; a phenomenon known as subsystem saturation. The host, not being aware of these issues, continues to write to the primary system, and may eventually fill up the cache on the primary system, which impacts SRDF/A’s ability to function smoothly.


Performance comparisons SRDF/S and SRDF/A host applications

When SRDF/A is implemented in a balanced configuration, the impact to the host writes is negligible.

One of the main advantages of SRDF/A is that it provides a dependent-write consistent point-in-time copy at the secondary site. This copy lags the primary volume by a small amount. This is demonstrated in Figure 53.

Figure 53 SRDF/A replication steps

SRDF/S mode is a true “no data loss” solution; however, there is a tradeoff. Since every write I/O must be sent across the network and acknowledged at the secondary site before it can be acknowledged back to the host (steps 2–3 in Figure 54 on page 196), there is an accompanying increase in I/O response time; this increase in response time elongates as the replication distance increases. Further, SRDF/S configurations must incorporate sufficient link bandwidth to ensure that writes never spend time in a queue waiting to be sent across the link.


Figure 54 SRDF/S replication steps

In many cases, these synchronous requirements are too stringent and costly for response-time-sensitive applications such as small block OLTP databases. This is especially true for long distances since bandwidth becomes even more expensive, and the accompanying increase in response time, which is proportional to the distance, may be too high for practical purposes.
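The distance-proportional response time penalty can be estimated from first principles: light in fiber travels at roughly 200,000 km/s, so each 100 km of one-way distance adds about 1 ms of round-trip latency before equipment and protocol overhead. The constants below are rule-of-thumb assumptions for illustration, not SRDF measurements:

```python
# Rough estimate of the minimum synchronous replication penalty as a
# function of distance. Real links add switching, protocol, and disk
# subsystem overhead on top of this propagation floor.

SPEED_IN_FIBER_KM_PER_MS = 200.0  # ~200,000 km/s expressed per millisecond

def sync_rtt_ms(distance_km, round_trips=1):
    """Minimum added response time for a synchronous write over distance_km.

    round_trips: protocol exchanges per write (>= 1); a synchronous write
    needs at least one full round trip before ending status reaches the host.
    """
    one_way_ms = distance_km / SPEED_IN_FIBER_KM_PER_MS
    return 2 * one_way_ms * round_trips

for d in (50, 100, 500):
    print(f"{d} km: >= {sync_rtt_ms(d):.1f} ms added per write")
```

Even this lower bound shows why small-block OLTP workloads, whose native response times may themselves be only a few milliseconds, feel the synchronous penalty so acutely at long distances.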

An alternative solution which greatly minimizes this performance impact is asynchronous replication using either SRDF/AR with disk buffering and extended cycle times (many minutes to hours), or SRDF/A with cache buffering and minimum cycle times (seconds to minutes).


Asynchronous replication: The major consideration

The most important performance assumption in any cache-based asynchronous solution is that, on average, the amount of writes coming into the system must be equal to the amount of writes going out, as illustrated in Figure 55.

Figure 55 Inflow and outflow of writes are required to be equal on average

If this consideration is violated for extended periods of time, no amount of cache will enable SRDF/A to continue functioning. An SRDF/AR solution should be considered instead. The following sections explain exactly what the phrase “on average” means and its dramatic effect on the viability of SRDF/A.

In general, a functioning SRDF/A system can be viewed as a dam on a river, where the inflow and outflow must even out except for short periods of time (analogous to workload bursts) when the inflow may exceed the outflow. The dam’s capacity, though finite, is the ability to contain the inflow when the outflow is constrained; the dam overflows when the inflow consistently exceeds the outflow such that overall capacity is exhausted.
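The dam analogy can be made concrete with a toy simulation of cache backlog. All figures are illustrative; this is not how Enginuity accounts for cache:

```python
# Toy "dam" simulation: per-interval write inflow vs. a fixed link
# outflow. Cache absorbs short bursts; the session is at risk once the
# backlog exceeds the cache available to SRDF/A.

def simulate(inflow_mb, link_mb_per_interval, cache_mb):
    """Return (max_backlog_mb, overflowed) for a sequence of inflows."""
    backlog = 0.0
    max_backlog = 0.0
    for writes in inflow_mb:
        # Backlog grows by the excess of inflow over outflow, never below 0.
        backlog = max(0.0, backlog + writes - link_mb_per_interval)
        max_backlog = max(max_backlog, backlog)
        if backlog > cache_mb:
            return max_backlog, True  # the dam overflows: SRDF/A would drop
    return max_backlog, False

# A burst above the 40 MB/interval link rate is absorbed by cache...
print(simulate([30, 80, 30, 20], 40, cache_mb=50))
# ...but a sustained excess eventually exhausts any finite cache.
print(simulate([80, 80, 80, 80], 40, cache_mb=50))
```

The second call fails no matter how large `cache_mb` is made, which is exactly the point of the inflow/outflow rule: cache buys time for bursts, not for a sustained imbalance.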

The analysis methodologies, which are explored in subsequent sections of this chapter, assume that the configurations and the workloads of both the primary and secondary Symmetrix sites have been balanced. Balanced configurations are those in which the hardware and other necessary resources are comparable with each other in terms of capability and capacity. A balanced design enables the ongoing destaging of data to the secondary devices at least as fast as the data is being accrued at the primary site. If, for example, the DAs or physical disks are too heavily utilized and can no longer accept writes from global cache without incurring excessive delay penalties (device queuing), then global cache may saturate and compromise SRDF/A’s ability to provide remote data protection.

For persistently unbalanced configurations, EMC has alternative solutions that support a far more dramatic imbalance between the input and output capabilities of the systems. One such solution is an SRDF/AR configuration that uses TimeFinder BCVs to essentially buffer the writes to disk (rather than the cache), on both the primary and secondary subsystems in order to maintain a consistent copy with long cycle times, as shown in the example in Figure 56.

Figure 56 SRDF/AR alternative solution

The use of physical disks, for the buffering of writes rather than the global cache, allows the primary and secondary volumes to be as much as 100 percent out of synchronization in SRDF/AR.

Fundamental SRDF/A variables

Previous sections have compared SRDF/A’s functionality to that of other EMC Symmetrix replication offerings. The following sections cover the main parameters which must be taken into account during the proper planning of an SRDF/A solution.

The seven main parameters that govern SRDF/A’s robust operation are:

◆ Peak time and duration

◆ Locality of reference

◆ Link bandwidth


◆ Amount of cache

◆ Service actions

◆ Network resource outages

◆ Future growth


Peak time

The graph in Figure 57 depicts a typical write workload. In order to maintain the average inflow as approximately equal to the average outflow as discussed earlier, the link bandwidth would need to be configured so that it could easily handle at least the minimum average write workload (the red dotted line). If the available bandwidth stays consistently below this average, SRDF/A will inevitably fail since the ability of the global cache to buffer the incoming writes will eventually be exhausted.

Figure 57 Typical write workload and average workload

Activity level and duration of peak time

While “Asynchronous replication: The major consideration” stated that the inflow and outflow of writes into the system needs to be equal on average, it did not define the interval over which this average needs to be calculated, since there are different results when averaging the load across five minutes, an hour, or a day. These results can be misleading if the interval of study is not carefully chosen; this problem is related to the time granularity involved and can be referred to as the granularity averaging problem.


The peak time is used to evaluate the busiest time period during the workload. For example, if you set the peak time to be x minutes, the busiest x minute time interval is used to determine the average write peak.

Mathematically, there are two fundamental characteristics which need to be well understood in dealing with peak value behavior: the duration of the peak and the amplitude of the peak. Duration of the peak is an x-axis value, usually measured in units of time, that conveys how long a particular peak persists; the longer the duration, the more sustained this behavior is, and the more it affects the average. In a tightly complementary manner, the amplitude of the peak is a y-axis value that describes the intensity of the peak, usually measured in I/Os per second or megabytes per second (MB/s); this is a definitive, quantitative measure of the burstiness of the workload, and failure to measure it accurately and understand its ramifications will ultimately lead to grossly underconfigured SRDF/A subsystems.

Further, it is important to understand not just each characteristic of peak behavior, but their combination: duration together with amplitude. While it is often easier and more convenient to derive an average over a longer interval, doing so deemphasizes both characteristics of the peak, leading to erroneous conclusions about peak behavior and a lower-than-required bandwidth. To properly assess peak behavior, it is strongly recommended that an iterative study of multiple small time intervals be undertaken; the more erratic or bursty the incoming writes, the more iterative the study should become. The average write peak directly impacts the calculation of the link bandwidth and the amount of cache required for buffering the writes.

The peak time (T) represents the time the peak write workload was higher than the SRDF link bandwidth, as shown in Figure 58 on page 202. The height of this peak (H) represents the amount (typically in MB/s) the peak write workload exceeded the SRDF link bandwidth. The product of the two (T ∗ H) is roughly equivalent to the additional cache memory necessary to support this specific peak sample.

In most cases, increasing the peak time (while leaving the link capacity constant) increases both the amount of cache required and the RPO.
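The T ∗ H approximation above can be computed directly. The numbers in this sketch are invented for illustration:

```python
# The extra cache needed to ride out a single peak is roughly the area
# of the peak above the link rate: duration T times excess height H.

def extra_cache_mb(peak_mb_per_s, link_mb_per_s, duration_s):
    """Cache (MB) to buffer a peak of duration_s seconds where the write
    rate exceeds the link rate by (peak - link) MB/s, i.e. roughly T * H."""
    excess = max(0.0, peak_mb_per_s - link_mb_per_s)
    return excess * duration_s

# A 120 s burst at 70 MB/s against a 40 MB/s link needs roughly
# (70 - 40) * 120 = 3600 MB of additional cache for this one peak.
print(extra_cache_mb(70, 40, 120))  # 3600.0
```

This is only a per-peak estimate; the overall cache sizing must also cover cycle overhead and any overlapping peaks, which is why the later cache calculation section treats this as one input among several.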


Figure 58 Peak time

Regardless of the data collection method in use, it is very important to understand that in collecting performance or workload data, the minimum interval of the collection limits the peak duration that may be accurately evaluated. This is because substantial data averaging takes place over the duration of each sample.

The difference in the workload collection interval or duration as a function of the collection time is shown in Figure 59 on page 203. The graph shows the workload as collected in both a 10-minute average as well as a 2-minute average. The differences in the peaks are pronounced and readily obvious. This has a direct bearing on the SRDF/A cache and network bandwidth requirements because, at the very minimum, sufficient resources must be allocated to address these peaks (as described in the previous section).


Figure 59 Peak workload depends on collection interval

For example, if a workload collection tool samples data at 15-minute intervals, the accuracy of an analysis on any individual sample may not be accurate for projections of less than 15 minutes. Because of this phenomenon, it is recommended that data be collected in line with the proposed evaluation period (a 2-minute cycle requirement = 2-minute sample durations), or the existing data substantially inflated to account for possible spikes in the workload (30-second cycle requirement = 10-minute sample duration * reasonable inflation factor, say 20 percent or 1.2).

The latter approach tends to be far less accurate because it is strongly biased by the appropriateness of the inflation factor selected. Unfortunately, limitations in the granularity of the data collection tool, or the tendency of the collection tool itself to adversely impact workload performance during "tight" collection intervals, may necessitate such a method.

(Figure 59 plots write throughput in MB/s against time for the same workload averaged over 2-minute and 10-minute intervals: 2-minute average = 73 MB/s with a peak of 89 MB/s; 10-minute average = 40 MB/s with a peak of 71 MB/s.)
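The effect of the averaging window on observed peaks can be sketched with a short simulation. The sample data, burst shape, and window sizes below are illustrative assumptions, not the measurements behind Figure 59:

```python
import random

# Hypothetical 10-second samples of write throughput (MB/s) over one hour,
# with a short 2-minute burst in the middle. Illustrative data only.
random.seed(1)
samples = [random.uniform(20, 40) for _ in range(360)]
samples[170:182] = [random.uniform(80, 100) for _ in range(12)]  # 2-minute burst

def peak_average(series, window):
    """Highest average over any contiguous window of `window` samples."""
    return max(
        sum(series[i:i + window]) / window
        for i in range(len(series) - window + 1)
    )

peak_2min = peak_average(samples, 12)    # 12 x 10 s = 2 minutes
peak_10min = peak_average(samples, 60)   # 60 x 10 s = 10 minutes
print(f"2-minute peak:  {peak_2min:.1f} MB/s")
print(f"10-minute peak: {peak_10min:.1f} MB/s")
```

The 2-minute peak always comes out at least as high as the 10-minute peak, because any long window average is an average of shorter window averages; this is exactly why coarse collection intervals understate the peaks that cache and bandwidth must absorb.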


Locality of reference/write folding

Locality of reference is a term used to describe the apparent clustering of I/O access, for both read and write I/Os. There are two key properties of locality of reference: spatial locality and temporal locality. Spatial locality refers to the tendency of I/Os to cluster their accesses within localized neighborhoods. These neighborhoods could be:

◆ The same record or block

◆ The same, or closely adjacent tracks of the disk device

◆ The same, or closely adjacent cylinders of the disk device

When write accesses reference the same track, the track can be updated successively and transmitted just once or, at worst, fewer times than the number of accesses would dictate.

Temporal locality refers to the behavior of I/O accesses over time, and can be viewed as the frequency, or rate, at which the I/Os arrive at their respective locations. A good understanding of both temporal and spatial locality can yield very useful results in configuring I/O subsystems, usually producing near-optimal configurations.

Write folding exploits both characteristics of locality of reference, as discussed above, to reduce the number of times a track that was written many times over a given interval must be sent over the SRDF/A links. Sending just the latest updated version of the track for any given interval helps to decrease the required network bandwidth.
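As a rough illustration of write folding (a simplified model, not the Enginuity implementation), repeated writes to the same track within one capture cycle can replace the earlier version, so each track crosses the link at most once per cycle:

```python
# Minimal write-folding sketch: the capture cycle keeps only the latest
# version of each track, so folded writes never consume extra link bandwidth.

cycle = {}  # track id -> latest data captured for that track this cycle

def capture_write(track, data):
    cycle[track] = data  # a later write to the same track folds into one slot

# Ten host writes, but only three distinct tracks touched:
for i in range(10):
    capture_write(track=i % 3, data=f"update-{i}")

print(len(cycle), "tracks to transmit for 10 host writes")  # 3
```

Only the final contents of each track are transmitted at cycle switch, which is the bandwidth saving the text describes.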

In SRDF/A, locality of reference helps reduce the amount of Symmetrix global cache used and also reduces the required SRDF link bandwidth. This is a major advantage over many other competitive asynchronous solutions, and in particular, over all prevailing synchronous solutions where every write is sent across the link, and the principle of locality of reference is not utilized at all.

The effect of locality of reference on reducing the amount of cache is different from its effect on reducing network link bandwidth because cache is managed at a fixed-size track level (cache slot), while the link is managed at a variable-size block level. The following two sections describe the effect of locality of reference on both the cache and link bandwidth in greater detail.


Symmetrix cache locality of reference

In many cases, different host system writes go to the same location on disk. Since Enginuity can rewrite the new information to an existing slot, it is not necessary to allocate a distinct cache slot for each write.

Figure 60 shows the locality of reference derived from many I/O traces collected from Symmetrix systems at various customer sites. The graph shows the percent of rewrites to the same physical disk location as a function of the SRDF/A cycle time.

Figure 60 Physical disk locality of reference sample

Essentially, the customer data suggests that the longer the SRDF/A cycle time, the higher the chance that a host system write will hit the same cache slot repeatedly. A similar behavior is also observed in SRDF/AR environments, where a cycle time of hours (typical of SRDF/AR) increased the locality of reference gains in an even more pronounced manner. In this example, 15 percent less cache would be needed by SRDF/A if the cycle target time could be set to 60 seconds rather than 5 seconds.

(Figure 60 plots the average percentage of rewrites to a sector (the write hit percentage per cache slot, from 0 to 70 percent) against cycle time in seconds, from 0 to 480.)


SRDF link locality of reference

Locality of reference also improves the efficiency of the SRDF network links. Even if there are multiple data updates (that is, repeated writes) in the same cycle, Enginuity sends the data across the SRDF links only once.

This is a major advantage over current competitive asynchronous replication solutions in which every write is sent across the link, and the locality of reference benefit is not exploited. These asynchronous solutions consume as much bandwidth as a synchronous solution that must (by definition) send every I/O across the links.

The advantage gained from the locality of reference on the SRDF links is not necessarily the same as the advantage gained in cache memory. The main difference has to do with the fact that I/Os sent on the SRDF links are usually the same size as the host I/Os. The logic is similar to SRDF adaptive copy write pending mode which sends blocks at a time, as opposed to SRDF adaptive copy disk mode where the system always sends full tracks.

In such a case, the gain in bandwidth efficiency from the locality of reference is mainly from rewriting to the same block and not rewriting to the same track.

The rules for combining a number of small blocks to one larger I/O are complex and are not discussed here. However, there are many instances when the subsystem combines the original I/Os and sends them as one large I/O across the link. Even if this operation does not necessarily decrease the bandwidth, it does decrease the number of I/Os that the RA handles, and thus reduces the processing overhead per host I/O.
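A hedged sketch of the block-combining idea (the actual Enginuity rules are more complex, as noted above): merging adjacent modified blocks into contiguous runs reduces the number of I/Os the RA must handle, even when the total bytes stay the same:

```python
# Illustrative coalescing sketch, not Enginuity's actual algorithm:
# contiguous modified block numbers become one larger link I/O.

def coalesce(blocks):
    """Merge block numbers into contiguous (start, count) runs."""
    runs = []
    for b in sorted(set(blocks)):
        if runs and b == runs[-1][0] + runs[-1][1]:
            start, count = runs[-1]
            runs[-1] = (start, count + 1)  # extend the current run
        else:
            runs.append((b, 1))            # start a new run
    return runs

writes = [5, 6, 7, 12, 13, 40]   # six host writes
print(coalesce(writes))          # [(5, 3), (12, 2), (40, 1)] -> 3 link I/Os
```

Six host writes collapse to three link I/Os here; the bytes sent are unchanged, but the per-I/O processing overhead on the RA drops.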

Figure 61 on page 207 shows both the locality of reference and the concatenation of small blocks to one larger I/O for transmission.


Figure 61 Synchronous and asynchronous block transfer comparison

Figure 62 on page 208 shows the locality of reference derived from many I/O traces collected from Symmetrix systems at various customer sites. The graph shows the percent of rewrites to a given device address as a function of the cycle time, and clearly illustrates that even with short cycle times, the rewrites are on average ~20 percent; this rewrite percentage can be viewed as a measure of efficiency which could directly impact the required configurable bandwidth.

(Figure 61: applications tend to write data in proximity of time and place. Across tracks 0, 1, and 2, synchronous mode sends 10 I/Os and 10 blocks, while asynchronous mode sends 3 I/Os and 7 blocks: less bandwidth (7 vs. 10 blocks) and less SRDF overhead (3 vs. 10 I/Os).)


Figure 62 SRDF link locality of reference sample

It should be noted that these results are averages and, in some workloads, the locality of reference is smaller than the average shown in the graph. For example, when the workload is 100 percent sequential writes, the locality of reference is zero since there are no re-references. Therefore, if the peak write workload consists of mostly sequential write activity such as backup, it is best to ignore the link locality of reference.

(Figure 62 plots the average percentage of rewrites to a sector (the write hit percentage per cache slot, from 0 to 45 percent) against cycle time in seconds, from 0 to 480.)


Link bandwidth

One of the main advantages of the SRDF/A solution is the lower link bandwidth requirement when compared with synchronous mode; in synchronous mode, the link bandwidth needs to be at least as high as the maximum peak write workload to ensure that there is no adverse impact on host I/O response time.

As long as there is sufficient cache in which to buffer the writes, the maximum bandwidth requirement can be relaxed in SRDF/A for two reasons:

◆ Writes are buffered in Symmetrix global cache, which accommodates the short duration high write peaks that normally could exceed the link bandwidth.

◆ The effect of locality of reference. Unlike synchronous mode, not all host writes have to be sent across the link as they occur with SRDF/A. This has the overall effect of reducing the link bandwidth required as discussed previously.

Reducing the peak bandwidth requirements (where available bandwidth = peak workload) will, in most cases, increase the amount of cache required. This has the effect of increasing the cycle times and also the RPO. When the main goal is to minimize the RPO, the bandwidth requirement becomes close to the synchronous bandwidth requirement minus the locality of reference benefit as explained in the previous section.

The concept of average write peak is very important in SRDF/A, because it determines the requirements for network link bandwidth and cache.

As explained in “Fundamental SRDF/A variables,” it is important that the time interval over which the average is calculated be carefully chosen both from a time of day and duration point of view. This is important since the average write throughput may change substantially depending on whether the average is taken over a few minutes or over a few hours. The time of day selection needs to correspond to the busier, or busiest times of write activity, and the duration selection has to be sufficient in order to contain the peak activity.

Figure 63 on page 210 shows the relationship between the peak time selected and the link bandwidth required. In most cases the longer the peak time selected, the lower the calculated average peak writes.


This has the effect of lowering the required link bandwidth (even at the expense of higher cache requirements), lengthening the cycle time, and elongating the RPO.

Figure 63 Peak bandwidth depends on the collection interval

Link bandwidth estimates

Estimating the link bandwidth generally involves summing the writes to all of the volumes that will participate in the SRDF/A process. Any I/O performance analysis tool capable of collecting the write throughput (KB/s, IOP/s, I/O size) could facilitate this analysis.

The EMC ControlCenter Performance Manager (previously called Workload Analyzer, or WLA) or STP Navigator (EMC internal version) is suitable for this purpose when collecting data from a Symmetrix system. Figure 64 on page 211 shows an example of STP Navigator plotting the "Kbytes written per second" metric for a specific Symmetrix device (0079 in this case). The interfaces of these tools are nearly identical, differing only in the data sources they support and in their internal/external availability. "Analysis tools" provides additional information about the available analysis tools.

(Figure 63 plots writes against time between 13:00 and 13:20, comparing the link bandwidth required for synchronous mode with the averages taken across 10 minutes and across 20 minutes.)


Figure 64 STP Navigator “Kbytes written per second” metric example

Estimating future growth is essential for a successful implementation. Many changes and business growth can occur during the time interval between the planning and implementation phases. These have to be carefully accounted for. Whenever a replication sizing analysis is based solely on a point in time, or small interval of time data collection, albeit of peak duration, the resulting analysis will be outdated before, or soon after it is implemented.

To address this risk, the analysis effort must incorporate a “growth factor” which could account for predictable application workload growth for six months to a year, at a minimum. Growth factors of 30–50 percent, even 100 percent in specific cases, would not be unreasonable based on the expected increase in storage workloads over a one-year period.


There is no “ideal” growth factor for workload growth planning as it is based on very specific business metrics that vary from customer to customer. The analysis team, therefore, needs to provide an estimate for existing applications as well as future net new growth over the applicable period of time.
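A minimal sketch of applying a growth factor to a measured peak follows; the measured value and the 40 percent factor are illustrative assumptions only, not EMC recommendations:

```python
# Hedged bandwidth-planning arithmetic: inflate a measured peak write rate
# by an assumed growth factor for the planning horizon.

measured_peak_mb_s = 71.0   # e.g., a measured 10-minute peak (assumed value)
growth_factor = 1.40        # assumed one-year workload growth (40 percent)

required_bandwidth = measured_peak_mb_s * growth_factor
print(f"Plan for at least {required_bandwidth:.1f} MB/s of link bandwidth")
```

The appropriate factor must come from the business metrics discussed above; the arithmetic itself is trivial, but omitting it is a common cause of undersized links.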

As explained in previous sections, the time interval in which the data is collected also determines the minimum peak time that can be analyzed. This concept is critical in accurately determining network bandwidth. If the system collects data at a 10-minute interval, the accuracy of the calculation will be limited accordingly, and conclusions about intervals shorter than 10 minutes cannot be drawn. Further, the minimum peak time for any subsequent analysis will also be limited to 10 minutes. The best practice in such a case would be to ensure that the RPO can elongate to at least 10 minutes.

While sometimes impractical, the most accurate way to estimate the actual link bandwidth is to collect the peak time data with the proposed Symmetrix volumes in SRDF adaptive copy write pending mode. This is because SRDF/A network link throughputs very closely approximate the throughputs achieved with SRDF adaptive copy write pending mode. The algorithms of the SRDF adaptive copy write pending mode and SRDF/A are very similar, as the I/O size on the link will match the I/O size written by the host system.

In contrast, if SRDF is functioning in adaptive copy disk mode, the bandwidth may be somewhat skewed. In this mode, full tracks are sent across the links, which in some cases cause the throughput on the link to be different from that experienced with the actual host write workload. If the locality of reference is high, the link throughput is lower than required; if the locality of reference is low, the link bandwidth appears to be higher than the host write workload.

If the estimate is performed from data collected while SRDF was in synchronous mode, then the link bandwidth will be close to the real bandwidth in SRDF/A mode (since the link I/O size is equal to the host I/O size); however, synchronous mode may be constraining some of the throughput due to the increased response time, and thereby contributing to the latent I/O demand phenomena. For example, if the link distance is 100 km, even with GigE links there is an extra 1 ms of propagation delay that can impact the write throughput of a response time-sensitive application. In such a case, EMC recommends increasing the “growth” factor in the estimate.
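The extra millisecond cited above follows from a back-of-envelope propagation calculation, assuming light travels roughly 200,000 km/s in optical fiber and that a synchronous write requires a round trip:

```python
# Round-trip propagation delay over fiber. The speed of light in fiber
# (~200,000 km/s) is a standard approximation, used here as an assumption.

def round_trip_ms(distance_km, km_per_s=200_000):
    return 2 * distance_km / km_per_s * 1000

print(f"{round_trip_ms(100):.1f} ms")  # ~1.0 ms for a 100 km link
```

Real links add switching and protocol latency on top of this floor, so measured delays are typically somewhat higher.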


Similar to SRDF/S mode, there may be other conditions that impact the write throughput. For example, when changing the configuration from a regular RAID 1 (mirrored) configuration to RAID 10 or meta volumes, some physical drive bottlenecks may occur and cause a decrease in the write throughput. The same reasoning applies when measuring the throughput of older hardware subsystems that are to be migrated to newer ones. This too, will require increasing the workload “growth” factor in the estimate as described previously.

A common issue with planning link bandwidth is insufficient SRDF port and CPU processing power. SRDF RA utilizations can potentially run very high, making them the bottleneck, rather than network bandwidth. Determining the correct number of RAs is covered in more detail later in this chapter.

Bandwidth burst exception

The default minimum cycle time of SRDF/A is 30 seconds. This section explains why it is sometimes best to keep the default minimum cycle time in order to gain as much locality of reference as possible.

In many cases, especially during nonpeak periods, the actual transfer time of the data contained in a cycle from the primary subsystem to the secondary subsystem takes less than the minimum cycle time. From the network’s point of view, this type of workload looks like a short burst of I/Os every cycle switch. In many cases, and especially with GigE, the sum of the bandwidth across all ports defined on the Symmetrix far exceeds the available network bandwidth. This is usually a result of planning for redundancy; for example, planning for a port, board, or link failure. This can cause the Symmetrix ports to overrun the network, and may result in error conditions in both the Symmetrix and the network.

To avoid these issues, EMC recommends that users analyze these cases carefully and use an option on the Symmetrix GigE (RE) port to rate limit the throughput.
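Conceptually, rate limiting a port behaves like a token bucket. The sketch below illustrates the principle only; it is not the Symmetrix GigE option's actual mechanism, and all names and parameters are hypothetical:

```python
# Token-bucket sketch of port rate limiting: bursts larger than the bucket
# must wait for tokens to refill, so the port cannot overrun the network.

class TokenBucket:
    def __init__(self, rate_mb_s, burst_mb):
        self.rate = rate_mb_s       # sustained rate allowed onto the network
        self.capacity = burst_mb    # maximum instantaneous burst
        self.tokens = burst_mb

    def tick(self, seconds):
        """Refill tokens for elapsed time, capped at the burst capacity."""
        self.tokens = min(self.capacity, self.tokens + self.rate * seconds)

    def try_send(self, mb):
        if mb <= self.tokens:
            self.tokens -= mb
            return True
        return False

bucket = TokenBucket(rate_mb_s=100, burst_mb=50)
print(bucket.try_send(60))   # False: the burst exceeds what the port may emit
bucket.tick(1)               # one second of refill, capped at 50 MB
print(bucket.try_send(50))   # True
```

The effect is the one the text describes: short cycle-switch bursts are smoothed to a rate the network can actually carry.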


Determining number of SRDF remote adapters

The SRDF RA requirements vary depending on a number of key factors:

◆ Symmetrix family and Enginuity level

◆ RA type (FC, GigE, ESCON)

◆ Workload volume block size

◆ Compression on Gig-E RAs

RAs have different limits depending on the Symmetrix family and the Enginuity level in use; these need to be taken into account during the planning phase. Internal tools, such as the ET Tool or SymmMerge, can help estimate RA utilization across the various configuration and workload scenarios to ensure that the RA is never the bottleneck of the link.

EMC analysis tools (such as the ET Tool described in a later chapter) can estimate SRDF RA requirements based on the above inputs. These tools take many complex factors into account; for example, how fibre RA and port throughput performance has improved between DMX 800, DMX 1 and 2, DMX-3, and DMX-4 series Symmetrix systems.

The ET Tool bases its results on the average write block size for each volume in an interval. Based on the measured performance envelope of the specific RA and the Symmetrix family, the tool can determine the amount of time in milliseconds that each specific volume needs to transmit the total number of accumulated writes during an interval. Given that the adapter throughput results are block-size dependent, the analysis must also look at the block size and determine which adapter/frame constant should be applied.

The way to interpret the RA time usage results is to consider that each RA has 1,000 milliseconds of usage in 1 second. For example, Figure 65 on page 215 shows a peak RA usage requirement of just under 2,000 milliseconds. This necessitates a minimum of two RAs in order to handle the workload. For redundancy purposes (highly recommended), three RAs would be required in order to support the workload.

If redundancy is not planned for and an adapter should fail, the remaining adapters will likely not have enough capacity to sustain the average throughput, which is at the core of every successful SRDF/A solution.
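The sizing rule described above (1,000 milliseconds of RA usage per second of wall-clock time, plus one adapter for redundancy) can be expressed as a small helper; this is an illustrative sketch of the arithmetic, not an EMC tool:

```python
import math

# RA count rule of thumb: each RA supplies 1,000 ms of usage per second,
# and one extra adapter is added so a failure does not stall the workload.

def ras_needed(peak_ra_ms_per_sec, redundancy=1):
    return math.ceil(peak_ra_ms_per_sec / 1000) + redundancy

print(ras_needed(1990))  # peak just under 2,000 ms: 2 for the workload + 1 spare
```

With the Figure 65 peak of just under 2,000 ms, this yields the three adapters recommended in the text.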


Figure 65 ET tool remote adapter analysis results

(Figure 65, the "RA Time Chart," plots total RA time in mSec/Sec against time from 8/31 to 9/2, with the RA throughput limit in mSec/Sec overlaid; the y-axis spans 0 to 4,500 mSec/Sec.)


Cache calculation

Because a Symmetrix system utilizes its global cache for all operations, it is essential to formulate an accurate estimate of the amount of cache that SRDF/A requires. This is necessary to ensure that other operations which also require cache are not impacted, and that there is sufficient cache to enable the smooth operation of SRDF/A. The following section describes the cache calculations needed to identify the amount of SRDF/A-specific cache memory required for a specific configuration. The resulting SRDF/A cache quantity will exceed the minimum cache required for the overall smooth operation of the Symmetrix and, when added together, the two reflect the new total cache requirement for the Symmetrix system.

Cycle time and size calculation

The cycle time will stay constant and close to the minimum cycle time as long as the number of writes coming into cache is less than what the SRDF links can handle, and there are no bottlenecks on the secondary subsystem (a condition described in detail in subsequent sections). When the incoming writes exceed the SRDF link throughput, or when the restore operation is elongated, the writes are buffered in cache. When the buffer exceeds the amount that can be sent in the minimum cycle time, the cycle time increases dynamically.

Cache sizing example

The following example involves a Symmetrix system with SRDF network links capable of handling a 20 MB/s workload and a host system that writes 18 MB/s on average with a one hour (3600 s) sustained peak write rate of 40 MB/s. It is assumed that the minimum SRDF/A cycle time is t = 30 seconds. For the duration of the peak workload, the host writes into cache faster than the SRDF link can send these writes to the secondary subsystem. In this example, the factor between the maximum host write and SRDF link capability is F = 40/20 = 2.

To determine the cache requirement of the first cycle upon incurring the peak, multiply the initial cycle time by the peak host write rate as follows:

30 s * 40 MB/s = 1200 MB


Consider that, during the first cycle transfer to the secondary subsystem, the primary Symmetrix system is accruing the next cycle in cache.

The next cycle therefore takes 60 seconds, because SRDF’s speed is 20 MB/s resulting in:

60 s * 20 MB/s = 1200 MB

During those 60 seconds, the host writes 2,400 MB, forcing the next cycle to take 120 seconds, and so on.

Thus, the cycle lengths double each time. Estimating the duration of cycle number n can be derived as:

Tn = t × F^n seconds

where n = 0 denotes the first cycle of the peak (T0 = t).

The cache needed for this subsystem includes the cache required to buffer the first 30 seconds, plus the time the host workload exceeded the SRDF link bandwidth as follows:

20*30 + 3600*(40 – 20) MB = 72.6 GB

Note: For easier understanding, the examples in this section used simplified calculations and do not take into account some parameters, such as changing the I/O size with time or having a random or sequential workload. EMC simulation tools do take into account all the necessary parameters.
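The simplified arithmetic of this example can be reproduced directly, using the chapter's own numbers; the cycle-duration formula assumes the peak persists throughout:

```python
# Reproducing the chapter's cache-sizing example: a 40 MB/s host write peak
# sustained for one hour against a 20 MB/s SRDF link, minimum cycle 30 s.

t = 30            # minimum cycle time (seconds)
host = 40         # peak host write rate (MB/s)
link = 20         # SRDF link throughput (MB/s)
peak = 3600       # peak duration (seconds)
F = host / link   # factor between host writes and link capability (2.0)

# Duration of cycle n while the peak persists: Tn = t * F**n
durations = [t * F**n for n in range(4)]
print(durations)  # [30.0, 60.0, 120.0, 240.0]; cycle lengths double

# Cache needed: the first cycle's buffered data plus the host/link
# shortfall accumulated over the peak (MB).
cache_mb = link * t + peak * (host - link)
print(f"{cache_mb / 1000:.1f} GB")  # 72.6 GB
```

As the note above warns, this ignores I/O size changes and workload mix; EMC's simulation tools account for those parameters.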


Balancing SRDF/A configurations

One important goal of designing an SRDF/A configuration is to ensure that it is balanced. This section explains what unbalanced configurations are, and the issues to expect from them.

As mentioned previously, the fundamental difference between SRDF/S mode and SRDF/A mode can be summarized as follows:

◆ With SRDF/S, whenever there is a bottleneck anywhere in the I/O path, the whole I/O chain slows. This has the effect of increasing the application response times and constraining host throughput. However, regardless of the severity of the bottleneck, as long as any SRDF link stays up, SRDF/S itself does not stop replicating.

◆ With SRDF/A operating without Transmit Idle or DSE, if there is a bottleneck anywhere in the I/O path, writes continue from the host to the cache until, at some point, the writes saturate the cache and SRDF/A drops.

Symmetrix cache management

The purpose of the Symmetrix SRDF/A cache management algorithm is to enable SRDF/A to use as much cache as needed, up to a limit. The following sections explain the rules that apply when reaching the volume (logical device) and system-level cache limits on both the primary and secondary Symmetrix systems.

Volume (logical device) write pending limits

Volume (or logical device) level write pending flags are used by the Symmetrix disk adapter (DA) to locate cache slots containing pending writes; these are writes that must be destaged to physical disk. The volume write pending limit is a general Symmetrix limit created to ensure that sustained write activity to an individual volume does not consume all the available cache. This volume write pending limit is assigned during IML of the Symmetrix, and is based upon such factors as the number of logical volumes, volume size, and amount of available cache memory.

Symmetrix cache slots associated with SRDF/A are marked as “write pending” to the secondary subsystem devices. These “SRDF/A slots” are not counted against the primary volume write pending limit that includes only those slots with writes (also known as write pendings)


owed to the primary mirrors. In other words, the primary volume write pending limit is not affected by I/Os waiting to be sent by SRDF/A to the secondary site.

On the secondary site, a secondary volume reaching the volume write pending limit may impact SRDF/A operations. This may occur when the N-2 cycle on the secondary subsystem (the apply cycle) is performing the restore operation. The restore operation marks the N-2 cycle write I/Os as write pending for the disk adapter to destage to physical disk. Although this is generally a very fast cache operation, when the volume reaches its write pending limits the restore operation cannot continue until some cache slots are destaged to disk.

If the affected restore operation continues for a longer time than the current cycle time, the cycle switch is elongated until the restore operation is complete, as shown in Figure 66. This has the effect of impacting the cycle time of subsequent SRDF/A cycles until the system workload decreases and the cycles are allowed to catch up.

Figure 66 Elongated restore (N-2) cycle may impact other cycles

If allowed to continue past the normal time for a cycle switch, this condition eventually causes the N (capture) cycle on the primary Symmetrix system to continue to buffer the host writes in cache. As more and more global cache is consumed by the capture cycle on the primary Symmetrix system, less cache becomes available for normal volume write pending operations. In extreme cases, when the system write pending limit is reached on the primary Symmetrix system, it will cause SRDF/A to drop and cease replicating.

(Figure 66 shows the SRDF/A delta-set cycles: the active capture cycle N and inactive transmit cycle N-1 on the source, and the inactive receive cycle N-1 and active apply cycle N-2 on the target. 1. Host writes go into active cycle N. 2. Inactive cycle N-1 sends data to the target. 3. Completed cycles (N-2) are validated.)


System write pending limit

The system write pending limit is a global Symmetrix threshold created to ensure that writes coming into the subsystem consume no more than 80 percent of the total Symmetrix cache. That said, additional system write pending limitations apply to some Symmetrix models, such as the DMX-800 described below. In general, when the Symmetrix system reaches the system write pending limit, it throttles additional writes into cache until the disk adapter is able to destage slots to physical disk.

As opposed to the volume write pending limit described in the previous section, the cache slots associated with SRDF/A are counted against the system write pending limit. When the system write pending limit is reached, due to SRDF/A cycles extending on either the primary or secondary Symmetrix systems, SRDF/A drops and ceases replicating. To avoid this full cache condition, the SRDF/A environment must be properly designed and configured. As described previously, variables that may cause an imbalance in the SRDF/A environment include bandwidth, global cache usage, and workload allocation for the specific implementations.

In an effort to assist users to better manage global memory full conditions, Enginuity 5671 and later provides a method for limiting the amount of write pending slots (that is, cache) available to SRDF/A. In addition to this, DSE can also be used to alleviate potential cache full conditions.

DMX-800 considerations

As a result of its smaller batteries, the DMX-800 and similar models typically have a much lower system write pending limit, and therefore represent a special case for planning SRDF/A. These limitations were implemented to ensure that all writes are destaged to the physical drives in the case of two consecutive power failures. The smaller batteries restrict the time the system may remain active during a power failure. This limits the amount of cached writes which may be destaged to physical disk while the batteries are active, necessitating artificial system write pending limits.

As a result, the amount of cache configured for writes depends on the physical DMX-800 configuration. For example, the number of disk enclosures (DAEs) and the number of physical disks per disk enclosure has a dramatic effect on write pending cache as shown in Table 6 on page 221.

EMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS


Table 6 DMX-800 system write pending limit by system configuration

Adding cache to a DMX-800 does not always help SRDF/A (because the system write pending count does not always increase), therefore, EMC does not recommend using a DMX-800 or similar model for SRDF/A operations without a carefully detailed analysis.

Similar write pending considerations apply to other Symmetrix systems. For example, in DMX-2000 and DMX-3000, the maximum system write pending count is ~100 GB. Similarly, the DMX-1000 maximum system write pending limit is ~50 GB. In the case of the DMX-3, there are no artificial limitations, and the system write pending count is always 80 percent of the total available cache.
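The DMX-800 values in Table 6 follow a simple linear pattern. The sketch below reproduces them; the 78.125 MB constant is inferred by fitting the published table values and is not a documented EMC parameter:

```python
def dmx800_wp_limit_mb(daes: int, disks_per_dae: int) -> int:
    """Approximate DMX-800 system write pending limit in MB.

    Fits every entry in Table 6: ~78.125 MB per DAE for each disk
    per DAE, plus a fixed base equivalent to 3 extra disks per DAE.
    The constant is inferred from the table, not an EMC parameter.
    """
    return round(78.125 * daes * (disks_per_dae + 3))

# Spot-checks against Table 6:
assert dmx800_wp_limit_mb(2, 1) == 625
assert dmx800_wp_limit_mb(5, 7) == 3906
assert dmx800_wp_limit_mb(8, 15) == 11250
```

This makes it easy to see why adding cache does not raise the limit: the formula depends only on the DAE and disk counts.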

Unbalanced SRDF/A configurations

Another related issue arises when there is an imbalance between the "speed" of the primary devices and the "speed" of the secondary devices. This setup is known as an unbalanced configuration. Examples of configurations where the primary devices are "faster" than the secondary devices include the following:

◆ The primary devices are mirrored and the secondary devices are RAID 5 (most likely the secondary subsystem contains fewer drives).

◆ RAID 10 is used on the primary devices and RAID 1 is used on the secondary devices. This can be problematic since each primary device is much faster than its secondary device counterpart.

Overall write pending limit (MB) in a DMX-800 as a function of the average number of disks per DAE:

Disks:      1      2      3      4      5      6      7      8      9     10     11     12     13     14     15
2 DAEs:   625    781    938  1,094  1,250  1,406  1,563  1,719  1,875  2,031  2,188  2,344  2,500  2,656  2,813
3 DAEs:   938  1,172  1,406  1,641  1,875  2,109  2,344  2,578  2,813  3,047  3,281  3,516  3,750  3,984  4,219
4 DAEs: 1,250  1,563  1,875  2,188  2,500  2,813  3,125  3,438  3,750  4,063  4,375  4,688  5,000  5,313  5,625
5 DAEs: 1,563  1,953  2,344  2,734  3,125  3,516  3,906  4,297  4,688  5,078  5,469  5,859  6,250  6,641  7,031
6 DAEs: 1,875  2,344  2,813  3,281  3,750  4,219  4,688  5,156  5,625  6,094  6,563  7,031  7,500  7,969  8,438
7 DAEs: 2,188  2,734  3,281  3,828  4,375  4,922  5,469  6,016  6,563  7,109  7,656  8,203  8,750  9,297  9,844
8 DAEs: 2,500  3,125  3,750  4,375  5,000  5,625  6,250  6,875  7,500  8,125  8,750  9,375 10,000 10,625 11,250


◆ The primary subsystem contains faster RPM drives than the secondary subsystem.

◆ The primary subsystem contains more drives than the secondary subsystem.

◆ A fan-in to the secondary subsystem, such as multiple primary devices mapping to a single secondary device, will be problematic if the secondary subsystem has fewer drives or less DA “power.”

◆ The secondary subsystem has a smaller device write pending limit because the secondary subsystem has more devices, such as BCV volumes.

◆ Additional host load on the same devices containing the secondary volumes.

Balanced SRDF/A configurations

Even when the device protection scheme is the same on both the primary and secondary volumes, the secondary volume write pending limit may still be reached before the primary volume write pending limit. If the primary volume and the secondary volume configurations are similar, and if both Symmetrix systems are operating close to their maximum capability, it is possible that some event on the secondary subsystem, such as the disk adapter (DA) being temporarily busy, may cause the volume write pending limit to be reached first.

Ensuring that the secondary devices are slightly faster than the primary devices is a good way to avoid this, since it biases the secondary device write pending limit to be at least as high as, if not higher than, that of the primary device.

Another consideration is to configure the primary devices as RAID 5 3+1 and the secondary devices as RAID 5 7+1, with the total number of drives in each subsystem kept equal. The rationale is that although RAID 5 3+1 and RAID 5 7+1 have the same system-wide performance (when tested with the same number of drives), one RAID 5 7+1 device can deliver higher throughput than a RAID 5 3+1 device because the I/O is spread across more drives.


Note: RAID 5 7+1 has some disadvantages in the above configuration. RAID 5 7+1 carries double the risk of losing two drives in one RAID group compared to RAID 5 3+1; however, since this is a secondary volume, the risk is less significant.

RAID 5 7+1 also requires more logical volumes, which may be an issue in some cases.

Options to resolve configuration balance issues

In many cases the real solution for configuration balance issues is to ensure that the host never reaches the device write pending limits at all by eliminating the hot spots using methods such as disk striping and other similar techniques. In practice, however, this is not always easy to guarantee.

As a result of the issues specified in the previous sections, EMC recommends that SRDF/A configurations be carefully balanced. This implies having the same protection scheme, the same number of drives, drives of the same or slightly faster speed on the secondary, and the same number of DAs on both the primary and secondary subsystems.

For configurations that become unbalanced only during peak periods of activity, one alternative for maintaining consistency is to use an automated script. This script would be designed to split off a TimeFinder BCV/clone copy on the secondary subsystem, drop SRDF/A when the limits are reached, and then resume SRDF/A when the imbalance has ended. The primary impact of this option is an elongated RPO: for the duration of the time SRDF/A is inactive, the data continues to age on the BCV copies.

Balanced configuration summary

The fact that unbalanced configurations are not recommended does not mean that they will not function acceptably in an SRDF/A process. In most cases, when enough cache is configured, the main effect of such an unbalanced configuration is elongated cycle time, even when enough network bandwidth is configured. Unbalanced configurations may be acceptable for some period of time, at least until the workload grows, at which point some configuration effort must be expended to attain a balanced overall configuration.


Network considerations

Depending on the mode of the SRDF configuration, the Symmetrix systems can be inside the same room, inside the same building, inside different buildings, within the same campus, or hundreds (even thousands) of kilometers apart.

Distance can impact SRDF configurations in two ways: increased response time in synchronous mode, and degraded throughput, which can apply to all modes of SRDF, including SRDF/A.

One of the common cases where SRDF/A fills up the global cache is when link bandwidth is insufficient to sustain the host write throughput. A bottleneck on the link causes the global cache of the primary subsystem to become saturated. Because SRDF/A does not slow down the host writes, in some cases there may be insufficient global cache to sustain the peak write activity.
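As a rough illustration of how quickly an undersized link can exhaust cache, the time to a full-cache condition can be estimated from the write/link shortfall. The numbers below are hypothetical, and write folding and DSE are ignored:

```python
def time_to_cache_full_s(avail_cache_mb: float,
                         host_write_mb_s: float,
                         link_mb_s: float) -> float:
    """Seconds until the cache available to SRDF/A fills when host
    writes outpace the link; infinity if the link keeps up.
    Simplified: ignores write folding and DSE page-outs."""
    shortfall = host_write_mb_s - link_mb_s
    if shortfall <= 0:
        return float("inf")
    return avail_cache_mb / shortfall

# 40 GB available to SRDF/A, 300 MB/s host writes, 200 MB/s link:
print(time_to_cache_full_s(40960, 300, 200))  # 409.6 seconds
```

Even a modest sustained shortfall can consume a large cache in minutes, which is why peak (not average) write rates must drive link sizing.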

Effect of distance on throughput

The most basic concept to keep in mind about throughput at long distances is that in order to achieve maximum utilization of the link (maximum throughput), the link must be full of I/Os.

A good analogy for this would be transporting passengers using a train. The more passengers that occupy the available seats on the train, the more efficient this mode of transportation becomes. Also, the longer the train, the greater the number of passengers needed to ensure its efficiency.

Calculating the concurrent number of I/Os that guarantees a full link is accomplished using the following formula:

Concurrent number of I/Os = (link speed [MB/s] * RTT [ms] * R) / I/O size [KB]

Where:

RTT is the round trip time in ms, and

R is the number of round trips required per I/O when using Fibre RA (RF).

Note that R has a value of 2 for Enginuity codes prior to 5772.79.71, and a value of 1 for code 5772.79.71 and later. For example, for a host attached to a DMX-2 through RF directors over a 1 Gb/s (~100 MB/s) long-distance link with a 40 ms round trip time:

100 MB/s * 40 ms * 2 / 32 KB = ~250 concurrent I/Os

EMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS

Page 225: EMC SRDF/A and SRDF/A Multi-Session Consistency on z · PDF fileEMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS Version 1.5 • Planning an SRDF/A and SRDF/A Multi-Session

Planning an SRDF/A or SRDF/A MSC Replication Installation

For the same example with a GigE port, or any of the link extenders having a “fast write” option (for example, CISCO, Ciena, and McData) that would result in a single round trip, you would only need 125 concurrent I/Os to fill the link and achieve maximum throughput. For additional information or questions related to link extension fast writes, refer to the EMC Symmetrix Remote Data Facility (SRDF) Connectivity Guide available on EMC Powerlink.

Note that the RTT can be calculated from the geographic distance using the speed of light formula in glass (that is, 1 ms for every 200 km). For example, a 1,000 km one-way link would have an RTT of 10 ms.
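The concurrent I/O formula and the distance-to-RTT rule above can be expressed directly, using the worked example's numbers:

```python
def rtt_ms_from_km(one_way_km: float) -> float:
    """RTT from distance: light in glass covers ~200 km per ms,
    and a round trip covers the distance twice."""
    return 2 * one_way_km / 200.0

def concurrent_ios(link_mb_s: float, rtt_ms: float,
                   round_trips: int, io_kb: float) -> float:
    """Concurrent I/Os needed to keep a long-distance link full:
    (link speed * RTT * round trips) / I/O size."""
    # MB/s multiplied by ms conveniently yields KB in flight.
    return link_mb_s * rtt_ms * round_trips / io_kb

assert rtt_ms_from_km(1000) == 10.0             # 1,000 km one way
assert concurrent_ios(100, 40, 2, 32) == 250.0  # pre-5772.79.71 RF
assert concurrent_ios(100, 40, 1, 32) == 125.0  # single round trip
```

Halving the round trips (GigE or fast-write extenders) halves the concurrency needed to fill the same link.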

Some settings in the Symmetrix system may affect the throughput of SRDF. The most common case is an adaptive copy mode that has a setting called #CJOB. This parameter defines the maximum number of I/Os the disk adapter can add to the RA queue so that they can be sent to the secondary site. For example:

◆ #CJOB 5670 code (and lower): 40 per SRDF group

◆ #CJOB 5771 code: 80 per SRDF group

For example, if 142 concurrent I/Os are required to fill the link, a setting of 80 indicates that the expected throughput would be approximately half (80/142) of the available maximum.

Another limit that may affect throughput is the volume limit of a maximum of 32 concurrent I/Os per volume per RA. For example, when only one volume is being copied across a long link with two RAs, such as the one in the example above, the throughput will be limited to roughly 32 * 2 / 142 of the maximum.

With SRDF/A, approximately 400 I/Os are sent per RA CPU, unless the Fibre or GigE flow control mechanisms kick in. If this happens, it limits the number of I/Os (to prevent slow links from being overrun with too many I/Os) depending on the speed and length of the link.

Response time

Response time is mainly an SRDF/S mode concern, as every write to an SRDF device requires the primary Symmetrix system to send the write to the secondary Symmetrix system before the write can be acknowledged as completed to the host. Typically, this doubles the primary response time as well as adding some constant SRDF overhead.

Network considerations 225


In asynchronous modes (including adaptive copy modes), the host is acknowledged from the primary Symmetrix system and the I/Os are sent to the secondary Symmetrix system asynchronously. This is the reason asynchronous modes have a response time equal to the primary write response time.

Response time has a major effect on most host application performance. The most common example is a database. Most databases have a log volume that records every transaction. The writes to that log volume are typically sequential and single-threaded, which means no write is issued by the host until the previous write is acknowledged. In other words, the writes are dependent and heavily affected by the response time of each preceding write. In many cases the performance of the log volume inhibits the performance of the database, and dependency on response time is critical.

Writes to the table spaces in a database need not be synchronous in nature. In many cases, however, they are set up to be synchronous at the host level, which makes them dependent on response time as well.

It is best to allow these writes to be asynchronous from the host, although in many cases it may require some effort. This assists the performance of the application as these writes are sometimes flushed in a single very high burst, causing long queues in the system and high response times.

SRDF Quality of Service (QoS)

Under most conditions, SRDF adaptive copy disk mode operations have little or no impact on host performance. In environments with heavy I/O loads, the adaptive copy operation can affect host system performance. The Symmetrix QoS parameter enables the user to slow adaptive copy operations and minimize any performance impact from the copy process. This is especially useful during heavy SRDF synchronizing operations.

The QoS parameter inserts fixed delays between I/Os in the copy operations. The QoS value, when cubed, gives the delay in milliseconds per logical volume. Thus, an SRDF QoS priority of four directs a controlled delay of 64 ms (4 cubed) between track copies per volume. Higher settings insert longer delays between I/Os; a setting of 0 (default) inserts no delay.

QoS does not affect adaptive copy write pending mode. Changing the QoS parameter from its default value slows the copy operations.
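The cube relationship described above reduces to one line (an illustrative helper, not an actual management interface):

```python
def srdf_qos_delay_ms(qos_value: int) -> int:
    """Delay inserted between adaptive copy track copies per volume:
    the QoS value cubed, in milliseconds. 0 (default) means no delay."""
    return qos_value ** 3

assert srdf_qos_delay_ms(0) == 0    # default: no delay inserted
assert srdf_qos_delay_ms(4) == 64   # priority 4 -> 64 ms, as above
```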


Planning for SRDF/A Delta Set Extension

The use of Symmetrix cache-based delta sets allows SRDF/A to ride through a temporary throughput imbalance without dropping. Using purely cache-based buffering for the delta set data works well, provided there are no SRDF/A throughput imbalances that last long enough to exhaust cache on the primary or secondary subsystems.

As described previously, a throughput imbalance exists in an SRDF/A configuration if either of the following conditions is true:

◆ The transmit operation cannot transfer data as quickly as the rate at which host applications are generating writes (adjusted to account for write folding).

◆ Writes cannot be destaged from the apply delta set fast enough to keep up with the rate that data is arriving over the remote links.

In either case, data is entering the delta sets faster than it is removed, causing the delta sets to grow in size while the imbalance exists. If an SRDF/A throughput imbalance exists long enough, the delta set sizes will eventually grow beyond what can be sent in the minimum cycle time, resulting in cycle time elongation.

If the imbalance continues, the amount of cache used by the delta sets may reach the limit that the system imposes on the amount of cache that SRDF/A is allowed to use. At that point, SRDF/A will begin dropping SRDF/A sessions according to their drop priority.
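The growth-then-drop behavior described above can be illustrated with a toy cycle-by-cycle model. All rates below are hypothetical, and real behavior also depends on write folding, DSE, and the actual cycle-switch implementation:

```python
def simulate_capture_growth(write_mb_s, link_mb_s, min_cycle_s,
                            cache_limit_mb, max_cycles):
    """Toy SRDF/A model: a cycle lasts at least min_cycle_s, or as
    long as it takes to drain the transmit delta set over the link.
    If host writes outpace the link, cycle times elongate until the
    cache limit is reached and the session drops."""
    transmit_mb, history = 0.0, []
    for _ in range(max_cycles):
        cycle_s = max(min_cycle_s, transmit_mb / link_mb_s)
        capture_mb = write_mb_s * cycle_s          # new capture set
        if capture_mb + transmit_mb > cache_limit_mb:
            history.append("SRDF/A drops")
            break
        history.append(round(cycle_s, 1))
        transmit_mb = capture_mb                   # cycle switch
    return history

# Balanced: cycle time stays at the 30 s minimum.
print(simulate_capture_growth(50, 100, 30, 40960, 5))
# Imbalanced: cycles elongate (30, 45, 67.5, ...) until the drop.
print(simulate_capture_growth(150, 100, 30, 40960, 10))
```

The model makes the key point visible: once the transmit set cannot drain within the minimum cycle time, each cycle is longer than the last, so cache consumption compounds rather than growing linearly.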

DSE can be configured for any SRDF/A session, and within any configuration in which SRDF/A is a participant, including SRDF/Star and Concurrent SRDF. DSE is designed to preserve the major benefits of SRDF/A such as:

◆ Minimal impact on host write response time

◆ Use of write folding to reduce remote link bandwidth requirements

◆ SRDF/A-provided options for managing consistency

When planning an SRDF/A configuration involving DSE, it is important to capture application host-write profiles for time intervals that include the periods of heaviest write activity. These profiles are used to establish the base requirements for the SRDF/A configuration:


◆ To prevent SRDF/A from dropping, the maximum SRDF/A throughput supported by the system must be larger than the average required SRDF/A throughput. This is always a stated requirement for SRDF/A, whether DSE is used or not. DSE allows SRDF/A to ride through transient SRDF/A throughput imbalances.

◆ There must be enough cache available to SRDF/A to avoid the need for paging operations under normal, nondegraded conditions.

In order for DSE to allow SRDF/A to ride through a transient SRDF/A throughput degradation, these base SRDF/A configuration requirements must be augmented. This entails the following additions to the base configuration:

◆ A delta set save pool

◆ Additional cache

◆ Additional SRDF/A throughput

The sizing of these additions will be discussed in detail in the following sections, but at a high level the sizing depends on:

◆ The size and duration of the largest SRDF/A throughput imbalance that needs to be covered.

◆ How quickly the configuration must return to a normal RPO following a transient SRDF/A throughput degradation.

◆ The character and rate of the host write workload.

DSE paging performance considerations

The following sections outline some of the performance aspects of the paging operations performed by DSE.

Preference not to page if possible

SRDF/A manages delta set data using only cache-based buffering when enough cache is available to SRDF/A to do so, even for SRDF/A sessions that have DSE enabled. If SRDF/A sessions do not reach their page-out threshold, the use of DSE is not expected to have a measurable performance impact.

Character of page-in disk I/O

There are two types of page-in operations that DSE performs: bulk page-in and page-in using the starvation prevention mechanism.


◆ Bulk page-in

When paged-out data exists for a session and DSE determines there is enough room in cache, it schedules bulk page-in operations for the paged data. DSE only schedules bulk page-ins when there is enough room in cache to fit all of the delta set data (including all paged-out data) in cache without violating the Page-Out Threshold.

◆ Starvation prevention page-in

The starvation prevention mechanism is invoked to page data in when the conditions for bulk page-in are not met, and the danger would otherwise exist of stalling the flow of data over the remote links, or the flow of writes to the R2 volumes because of a lack of nonpaged data in the Transmit or Apply Delta Sets, respectively.

When the system detects a danger of a stall of either of these data flows, it estimates the minimum amount of data required to prevent the stall during the interval of time that will elapse before the next time DSE evaluates the status of the system, and pages in just that amount of data.

The starvation prevention mechanism is invoked even if the Page-Out Threshold of the session is already exceeded. Page-in operations while in starvation prevention mode have a higher DA/RA CPU overhead than bulk page-in operations.

Regardless of which mode is used, full tracks are always read in, even if they were only partially populated with data.

Note: Only the changed data on a paged-in track is sent over the links and destaged to the R2 devices, regardless of whether the track was ever paged out.

Page-out delta set preference

When the conditions are met for DSE to begin paging out delta set data from cache, the system applies the following selection criteria when deciding from which delta sets it prefers to page out data:

◆ On the primary subsystem: If page-out operations are triggered and some of the links are available, the system prefers to page out data from the capture delta set (but data from either delta set on the primary subsystem may be paged out).


◆ On the primary subsystem: If the page-out operations are triggered and no links are available, the system prefers to page out data from the transmit delta set (but data from either delta set on the primary subsystem may be paged out).

◆ On the secondary subsystem: If page-out operations are triggered, the system prefers to page out data from the receive delta set (but data from either delta set on the secondary subsystem may be paged out).

Sizing configuration additions required for DSE

The following configuration guidelines are based on the assumption that an SRDF/A session using DSE with Transmit Idle active for both primary and secondary subsystems must be able to ride through a potentially long-lasting link outage. Further, the session will remain active even if the link outage coincides with a period of heavy application host write activity.

In this context, a long-lasting link outage is one that persists for a sufficiently long time such that the SRDF/A cache utilization triggers page-out operations.

Delta set save pool requirements

The base SRDF/A configuration must be augmented with a delta set save pool that satisfies the following requirements:

◆ The delta set save pool configuration on the primary subsystem should have at least enough storage capacity to hold the largest capture and transmit delta sets on disk. The number of tracks' worth of storage capacity required in the delta set save pool to achieve this is approximately twice the number of cache slots used in the largest capture delta set (resulting in a slight overestimation). If this requirement is not met, then page-out operations may fail due to pool-full conditions.

◆ The rate at which DSE can transfer data to the delta set save devices on the primary subsystem must be large enough to allow cache slots to be freed by page-out operations as quickly as the average rate that the host application writes are causing additional cache slots to be added to the capture delta set. If this condition is not met, DSE may not be able to free cache quickly enough during a long link outage to prevent reaching cache limits.


◆ The delta set save pool storage requirements are the same for the primary and secondary subsystems.

◆ The rate that DSE must be able to free cache slots on the secondary subsystem is generally less than for the primary subsystem (as peak write rate is limited to the peak speed of remote links rather than the peak speed of hosts). To ensure that DSE can protect the configuration while failed over, the secondary DSE performance should be configured using the same criteria as that of the primary subsystem.
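The pool capacity and destage-rate requirements above can be collected into a simple planning check. This is an illustrative helper: the function name, the way the 2x rule is applied, and the track size are assumptions to be adapted to the actual configuration:

```python
def check_dse_pool(capture_slots_max: int, pool_tracks: int,
                   pool_destage_mb_s: float,
                   host_slot_rate_per_s: float,
                   track_kb: float) -> list:
    """Flag DSE save pool sizing problems per the guidelines above.
    track_kb is the device track size (pass the actual geometry)."""
    issues = []
    # Capacity: ~2x the slots of the largest capture delta set.
    if pool_tracks < 2 * capture_slots_max:
        issues.append("pool too small: page-outs may hit pool-full")
    # Destage rate must keep pace with host slot consumption.
    needed_mb_s = host_slot_rate_per_s * track_kb / 1024
    if pool_destage_mb_s < needed_mb_s:
        issues.append("pool too slow: cache limits may still be hit")
    return issues

# Example: 1,000-slot peak capture set, 2,000-track pool, 56 KB tracks
assert check_dse_pool(1000, 2000, 100.0, 100.0, 56.0) == []
```

Per the guidelines above, the same check would be run against both the primary and secondary subsystems.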

Additional cache requirement

During a remote link outage, the SRDF/A cache utilization level on the primary subsystem should hover near the level that triggers page-out operations, provided that DSE is able to page out data quickly enough to prevent SRDF/A cache usage from increasing. If, while the links are down, a transient increase in the host application write rate causes slots to be added to the capture delta set faster than they can be freed by page-out operations, then there must be enough cache available to SRDF/A, beyond what is used when page-out operations are triggered, to absorb the burst.

Note: The more quickly DSE can write data to the configured delta set save pool, the less additional cache is needed to survive host write bursts during a link outage.

Additional SRDF/A throughput requirement

The base amount of SRDF/A throughput required to prevent an SRDF/A imbalance in a nondegraded configuration must be increased to ensure that there is enough spare SRDF/A throughput available to allow the system to return to a normal RPO in an acceptable amount of time following a transient SRDF/A throughput imbalance. The time required to return to a normal RPO is inversely proportional to the amount of spare SRDF/A throughput available while the system is returning to normal RPO. This point is discussed in more detail in “Estimating RPO impact” on page 235.

Planning the delta set save device configuration

This section contains data regarding performance characterization tests that have been performed and analyzed to date.


Delta set save device layout

Many of the same considerations that apply to the design of Snap save pools apply to the design of delta set save pools. The devices in a delta set save pool should satisfy all of the following requirements:

◆ The devices should be evenly spread across all available DAs.

◆ The devices should all reside on drives that have the same speed.

◆ The devices should be evenly spread across all drives (of the given speed).

◆ The devices should all have the same protection type.

◆ The devices should all be the same size.

◆ RAID 5 and 6 are not recommended at this time.

Estimating delta set save device storage capacity requirements

The total storage capacity of the delta set save devices must be large enough to handle the largest SRDF/A throughput imbalance. When DSE writes data to a delta set save device to free a cache slot, it always allocates a full track of space in the delta set save device, even if it was not a full-track write. It should also be noted that in a given delta set, the delta set save devices contain at most one instance of a given track. If DSE pages out data for the same track more than once in a given delta set (a process known as repaging), the same space in the delta set save device is used.

The delta set save devices must have enough room to hold the largest amount of data that will ever be paged out from the two primary delta sets. The largest size that a single delta set may assume is equal to one full track of space on disk for each track accessed by a host write (regardless of whether the write was a full-track write or a partial-track write) during the longest cycle. The total number of tracks of space needed (on both the primary and secondary subsystems) is approximately twice the number of slots in the largest capture delta set.

Estimating delta set save device destage performance requirements

In estimating the delta set save device destage performance needed, a few aspects of how DSE writes to the delta set save devices must be understood:

◆ The data that DSE pages out when freeing a cache slot may be the product of one or more host writes. If the hosts perform rewrites or small block sequential writes within a given track, the associated cache slot will contain data from multiple host writes.


In computing the rate at which cache slots are consumed in the capture delta set, each random host write always consumes a slot (or more if the random write block size is greater than a track). On the other hand, multiple sequential host writes with a block size that is less than a track may fit in a single slot. It is therefore important to know the percentage of random and sequential writes in the workload, as well as the random and sequential block sizes, and the probability of rewriting.

◆ The performance of page-out and page-in operations depends on the degree of fragmentation in the delta set save pool, which in turn depends on the number of SRDF/A sessions sharing the pool.

◆ The page-out operations performed by DSE can be normal page-out or repaging operations. The IOPS and transfer performance demands of the two types differ.
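Those workload factors can be folded into a rough capture-slot consumption estimate. This is a simplified sketch: rewrites of already-buffered tracks and pool fragmentation are ignored:

```python
import math

def capture_slots_per_s(rand_writes_s: float, rand_kb: float,
                        seq_writes_s: float, seq_kb: float,
                        track_kb: float) -> float:
    """Rough rate at which host writes consume capture delta set
    slots: each random write takes at least one slot (more if it is
    larger than a track); sequential writes share slots until a
    track fills. Rewrite hits on buffered tracks are ignored."""
    rand_slots = rand_writes_s * max(1, math.ceil(rand_kb / track_kb))
    seq_slots = seq_writes_s * seq_kb / track_kb
    return rand_slots + seq_slots

# 1,000 random 8 KB writes/s on 56 KB tracks -> 1,000 slots/s
assert capture_slots_per_s(1000, 8, 0, 0, 56) == 1000
```

The resulting slot rate is the input needed for the destage-rate check described under the delta set save pool requirements.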

Normal paging

A page-out operation is considered normal if there is not already paged-out data for the corresponding track in the delta set in question. A normal page-out operation is always a full-track write even if only part of the track was accessed by the host application. The implications on the needed delta set save device destage performance depend on whether CKD or FBA data is being paged out.

For CKD, a normal page-out always consumes a full track of space on the save device, but only the data written by the host is transferred during the page-out operation. Records in the track that were not written to by the host are made write-pending, but are given a data length of 1 and a zero key length. This is unlike FBA page-outs where the potential for amplified bandwidth requirements is vastly less and can thus be ignored.

Normal paging operations tend to be sequential. If a delta set save pool is shared by multiple SRDF/A sessions, the potential for pool fragmentation exists. As a pool becomes more fragmented, the physical disk operations underlying even normal paging operations become more random.

Repaging

A page-out operation is a repaging operation if there is already paged-out data in the delta set in question for the secondary track for which data is being paged out. In this case, DSE writes the new paged data on top of (or merged with) the existing track of data in the delta set save device used for the previous page-out. Only the changed data is written to the delta set save device when repaging.

Repaging operations tend to be random, as changed data is written to the delta set save device.

If the host applications write to a track that already has paged data in the capture delta set, a new cache slot is allocated for the write, but DSE does not immediately update the existing track data on disk in the delta set save pool. If DSE later decides to bring the paged-out data for the track back into cache, DSE will merge the paged-out data with the new data if needed. Only if DSE is forced to page out the cache slot used by the new write, will a repage operation take place.

Deciding how many sessions to assign per delta set save pool

DSE provides considerable flexibility in the way that SRDF/A sessions make use of delta set save pools. On the one hand, all SRDF/A sessions may use the same delta set save pool, and on the other, separate delta set save pools may be assigned to each SRDF/A session. Hybrid configurations include those in which some delta set save pools are shared by multiple SRDF/A sessions, and some delta set save pools serve only a single session.

Delta set save pool space is utilized most efficiently if all sessions use the same pool. However, when multiple sessions share a pool, the potential for pool space fragmentation increases. When a pool becomes fragmented, the paging operations using the pool tend to degrade because the disk I/O performed on a fragmented pool is less sequential than for an unfragmented pool.

DSE pool performance considerations

The following information is needed to determine the IOPS and transfer requirements for the delta set save pool for a given workload:

◆ Number of random host writes per second

◆ Number of sequential host writes per second

◆ Random host write block size

◆ Sequential host write block size

◆ The amount of SRDF/A throughput that is normally available

◆ The probability of the hosts performing a rewrite to a given track (other than as the result of a sequential write, since that is accounted for separately)

EMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS


Planning an SRDF/A or SRDF/A MSC Replication Installation

Performance testing is required to determine the level of sequentiality to expect in paging I/O, and the amount of extra DA/RA CPU consumption as a result of using DSE.

Estimating RPO impact

If DSE is engaged to handle an SRDF/A throughput imbalance, cycle times, and hence RPO, will be elongated. The time required to return to normal RPO is inversely proportional to the amount of spare SRDF/A throughput available following the end of the transient SRDF/A throughput imbalance.

While an SRDF/A throughput imbalance exists, it takes longer for SRDF/A to transmit or restore a given amount of delta set data than it takes for the host applications to generate that amount of delta set data. If an imbalance exists and the system is able to perform cycle switches, the length of each cycle is the length of the previous cycle multiplied by a factor equal to the rate the host applications are generating writes (adjusted downward to account for write folding) divided by the maximum SRDF/A throughput. If the imbalance is removed and spare SRDF/A throughput becomes available, the situation reverses. That is, it takes less time for SRDF/A to transmit and restore a given amount of delta set data than it takes for the host applications to generate that amount of delta set data. Let:

L = Maximum SRDF/A throughput

H = Average host write rate during normal conditions

P = Average host write rate during imbalance

T = Minimum cycle time

The cycle length growth during imbalance is expressed in the following sequence:

T, T*(P/L), T*(P/L)^2, T*(P/L)^3, T*(P/L)^4, … , T*(P/L)^(n-1)

This shows that each cycle is longer than the previous cycle by a factor of P/L. If the imbalance ends at the end of the last cycle, and if the host-write rate returns to H, cycle time reduction will occur as follows:

(H/L) * T*(P/L)^n, (H/L)^2 * T*(P/L)^n, (H/L)^3 * T*(P/L)^n, … , T

Network considerations 235

This shows that during the return to normal RPO, each cycle shortens by a factor of H/L. In the event of a remote link outage of length X, the length of the current cycle is extended by length X. When the remote links are restored, cycle lengths reduce according to the same sequence:

(H/L) * (T + X), (H/L)^2 * (T + X), (H/L)^3 * (T + X), … , T
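The growth and recovery sequences above can be sketched in a few lines of Python. This is a hypothetical illustration written for this discussion, not an EMC utility; the parameter names follow the definitions given earlier.

```python
# Hypothetical helper modeling the cycle-length sequences above.
# All rates are in MB/s and all times in seconds.

def cycle_lengths(T, H, L, P, imbalance_cycles, recovery_cycles):
    """Model SRDF/A cycle lengths through a throughput imbalance.

    T: minimum cycle time
    H: average host write rate during normal conditions (H < L)
    L: maximum SRDF/A throughput
    P: average host write rate during the imbalance (P > L)
    """
    lengths = [T]
    # During the imbalance, each cycle grows by a factor of P/L.
    for _ in range(imbalance_cycles):
        lengths.append(lengths[-1] * (P / L))
    # After the imbalance, each cycle shrinks by a factor of H/L,
    # bottoming out at the minimum cycle time T.
    for _ in range(recovery_cycles):
        lengths.append(max(T, lengths[-1] * (H / L)))
    return lengths

# Example: 30 s minimum cycle, 60 MB/s normal host rate, 100 MB/s
# maximum SRDF/A throughput, and 140 MB/s during the imbalance.
seq = cycle_lengths(T=30, H=60, L=100, P=140,
                    imbalance_cycles=4, recovery_cycles=10)
```

With these example numbers, each cycle grows by a factor of 1.4 while the imbalance lasts, then shrinks by a factor of 0.6 until the minimum cycle time is reached again.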

The rate of cycle time growth depends on the magnitude of the SRDF/A throughput imbalance. The rate that cycle times reduce when the imbalance ends depends on the amount of spare SRDF/A throughput. The use of Delta Set Extension may be needed to keep SRDF/A from dropping, but Delta Set Extension cannot slow the rate at which cycles lengthen, or speed up the rate at which cycles shorten. If the paging performance of the Delta Set Extension configuration is suboptimal, cycles may lengthen more quickly during the imbalance, or shrink more slowly, while returning to normal RPO.

If DSE is configured such that it is not a bottleneck, the time taken to return to a normal RPO is a function of how much spare SRDF/A throughput exists. A configuration designed for a given application should have enough spare remote link bandwidth (and restore bandwidth) to allow the system to return to a normal RPO in a timely enough fashion to satisfy any maximum RPO requirements the application may have.

The following idealized example illustrates the importance of considering the amount of spare SRDF/A throughput available to enable SRDF/A to return to a normal RPO following a transient SRDF/A throughput imbalance. Consider an SRDF/A configuration in which the average rate the application hosts generate writes is 80 MB/s. Consider further that, immediately following a cycle switch (to simplify the calculations), the remote links are all lost, and that 500 seconds later the links are all restored. Finally, assume that the amount of write folding in the workload is negligible (again, just to make the calculations easier).

Table 7 on page 237 shows the time required for this example to return to a normal RPO following the restoration of the links, as a function of the amount of SRDF/A throughput available to the system once the links are restored. These are predicted results based on the assumption that SRDF/A does not drop, and that SRDF/A is able to take full advantage of the available SRDF/A throughput while returning to normal RPO.

The key point is that the minimum amount of time required to return to normal RPO is inversely proportional to the amount of spare SRDF/A throughput available while SRDF/A is returning to normal RPO. If you halve the amount of spare SRDF/A throughput available, you double the amount of time required to return to normal RPO following an SRDF/A throughput imbalance.
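The figures in Table 7 can be approximated with a simple back-of-the-envelope model: the backlog that accumulates during the outage drains at the spare throughput rate. The sketch below assumes exactly that, and is not the detailed cycle-by-cycle model behind the table.

```python
def time_to_normal_rpo(outage_s, host_rate_mb_s, available_mb_s):
    """Approximate seconds to return to normal RPO after a link outage.

    Assumes the backlog accumulated during the outage (outage length
    times the host write rate) drains at the spare SRDF/A throughput.
    """
    backlog_mb = outage_s * host_rate_mb_s          # data written during outage
    spare_mb_s = available_mb_s - host_rate_mb_s    # throughput left for catch-up
    return backlog_mb / spare_mb_s

# 500 s outage, 80 MB/s host writes, 120 MB/s available throughput:
# 40,000 MB of backlog drains at 40 MB/s spare, about 1,000 s
# (Table 7 gives 997 s from the more detailed model).
```

The same arithmetic reproduces the rest of the table to within a fraction of a percent: 20 MB/s of spare gives about 2,000 s, and 2 MB/s of spare gives about 20,000 s.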

DSE sizing example

For this DSE sizing example, assume a host workload as follows:

◆ 1,500 IOPS.

◆ 50 percent write operations, and 50 percent read operations.

◆ 70 percent of the writes are random, the rest (30 percent) are sequential.

◆ The block size for sequential and random writes is 8 KB.

◆ The probability of hosts writing to the same track more than once (except as the result of a sequential write) is negligible.

In addition to the 6 MB/s of remote link bandwidth needed to support the normal flow of data, there is 2 MB/s of remote link bandwidth available, making a total of 8 MB/s. Assume that the primary and secondary subsystems are DMX2500s each with four DAs and two Gigabit Ethernet RAs.

Table 7  Returning to normal RPO following a 500-second remote link outage

  Host write    Available SRDF/A     Amount of “spare”           Time to return
  rate (MB/s)   throughput (MB/s)    SRDF/A throughput (MB/s)    to normal RPO (s)
  80            120                  40                              997
  80            100                  20                            2,004
  80             90                  10                            3,987
  80             85                   5                            7,988
  80             82                   2                           19,990

Assume that the threat of a temporary increase in application host write rate which cannot be handled in cache is small, but there is a need to ride through temporary remote link outages that are longer than what can be handled using cache-based delta set buffering only.

To keep SRDF/A from dropping during a long link outage, DSE must be able to free cache slots as quickly as hosts are adding slots to the capture delta set on the primary subsystem (R1), and as quickly as the transmit operation is adding slots to the receive delta set on the secondary subsystem (R2). Because this configuration is intended to support long outages, allowances have to be made for long periods in which page-out and starvation page-in operations are occurring concurrently. The page-out rate requirements can be used to determine the amount of DA/RA CPU time required by the DSE task. In addition, the delta set save pool configuration must be able to support the page-out and page-in I/Os.

Rate at which slots are added to the capture delta set on the primary side

The rate at which slots are added to the capture delta set can be calculated using information about the host write workload. Note that each random write requires a new slot (the example has no random rewrites); however, the sequential writes only require new slots when a new sequence begins or when a current sequence fills a track. Thus, for small-block sequential host writes, the number of slots added to the capture delta set is less than the number of sequential writes. For the case of 8 KB sequential writes (with sequences that are a multiple of 8), the number of slots added to the capture delta set per second is one-eighth of the rate at which the hosts are issuing the sequential writes. This is illustrated in the following calculation example:

slot addition rate = (random write rate) + (one-eighth of 8 KB sequential write rate)

Using the quantitative values discussed at the start of this section and substituting into the above equation yields the following:

R1 capture delta set slot addition rate = 1500 * 50% * 70% + (1/8)*1500 * 50% * 30%

= 525 + 28 = 553 slots/s

Rate at which slots are added to the receive delta set on the secondary subsystem

The rate at which slots are added to the receive delta set on the secondary subsystem depends on how much remote link bandwidth and RA processor capacity is available. In this example, there are two Gigabit Ethernet RAs and a total of 8 MB/s of link bandwidth. The workload is limited by the amount of remote link bandwidth, not the RA IOPS. The receive delta set slot addition rate is the product of the calculated rate for the capture delta set (as given earlier) and the ratio of the remote link transfer rate to the rate at which the hosts are transferring data. This is illustrated as follows:

receive delta set slot addition rate = 553 slots/s * (8 MB/s divided by 6 MB/s) ≈ 736 slots/s
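As a sketch, the two slot-addition rates can be reproduced from the example's workload figures. The raw arithmetic gives roughly 737 slots/s for the receive delta set; the 736 slots/s quoted above reflects the document's rounding.

```python
# Slot-addition arithmetic from the sizing example.

iops = 1500
write_frac = 0.50        # 50 percent of I/Os are writes
random_frac = 0.70       # 70 percent of the writes are random
seq_frac = 0.30          # 30 percent of the writes are sequential
writes_per_slot = 8      # 8 KB sequential writes fill a track in eighths

random_writes = iops * write_frac * random_frac   # 525 writes/s
seq_writes = iops * write_frac * seq_frac         # 225 writes/s

# Each random write consumes a slot; sequential writes share slots.
r1_slot_rate = random_writes + seq_writes / writes_per_slot   # ~553 slots/s

# The receive delta set on R2 fills at the ratio of link rate to host rate.
link_mb_s, host_mb_s = 8, 6
r2_slot_rate = r1_slot_rate * (link_mb_s / host_mb_s)   # ~737 slots/s
# (The TechBook rounds this to 736 slots/s.)
```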

Page-out I/O workload

For every slot freed there is a 57 KB write to the delta set save pool. Under ideal conditions (that is, no repaging and no pool fragmentation), this is a mostly sequential write workload. Both repaging and pool fragmentation cause randomization of the write workload.

In this example, the assumption is that there is no repaging. Therefore, the primary subsystem delta set save pool must accommodate 553 IOPS of 57 KB writes, and the secondary subsystem delta set save pool must accommodate 736 IOPS of 57 KB writes.

Page-in I/O workload

At a minimum, the delta set save pool should accommodate the read load associated with the page-ins performed in starvation-prevention mode. The delta set save pool read I/O load associated with the starvation mode page-in may be estimated as follows:

The rate at which tracks are read back from the delta set save pool on the primary subsystem is the same as the rate at which tracks are added to the secondary receive delta set. In this example, there is a requirement to read back 736 tracks per second from the delta set save pool, which equates to 736 IOPS of 57 KB reads.

The rate at which tracks are read back from the delta set save pool on the secondary subsystem is paced according to the rate at which the secondary apply operation progresses. At a minimum, it should be possible to read data in from the secondary delta set save pool as quickly as it is added. Thus, the read load would be 736 IOPS of 57 KB reads.

Under ideal conditions (no pool fragmentation), this is a mostly sequential read workload. As mentioned earlier, pool fragmentation will cause randomization of the read workload.

Calculating DSE task CPU time requirements

If there is not enough DSE task CPU time available, DSE will not be able to free slots quickly enough to prevent cache utilization from reaching the SRDF/A limit, thereby causing SRDF/A to drop (assuming no throttling). The amount of DA/RA CPU time required can be calculated using the guidelines described previously.

The DSE task CPU requirement for paging out depends on the amount of data in the slots being freed. The page-out workload for this example consists of 525 slots per second of slots containing a single 8 KB write, and 28 slots per second of fully-written slots. The number of DA CPU seconds per second required to handle this workload is:

525/540 + 28/1750 = 0.988 CPU seconds/s

With 16 DA processors (across the four DAs), the number of CPU seconds available to the DSE task per second is:

16 * 0.16 = 2.56 CPU seconds/s

Thus, the CPU time available to the DSE task on the DAs (leaving aside what the RAs provide) is more than twice what is needed to accommodate the page-out rates calculated for this example. Even with the links up (when the RAs allocate less of their time to the DSE task), and while starvation-prevention page-in is taking place (which reduces the time the DSE task spends scheduling page-outs), this configuration can handle the page-out workload. This assumes, however, that the DAs and RAs are not so busy with other tasks that they cannot give the DSE task its allotted CPU share.
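The CPU budget comparison above can be written out as follows (a sketch using the example's figures; the per-slot rates of 540 and 1750 slots per DA CPU second come from Table 8):

```python
# Required DSE task CPU: 525 slots/s holding a single 8 KB write
# (540 slots per DA CPU second) plus 28 slots/s of fully written
# slots (1750 slots per DA CPU second).
required_cpu = 525 / 540 + 28 / 1750    # ~0.988 DA CPU seconds per second

# Available DSE task CPU: 16 DA processors, each giving the DSE task
# a 0.16 utilization share.
available_cpu = 16 * 0.16               # 2.56 DA CPU seconds per second

headroom = available_cpu / required_cpu  # comfortably above 2x
```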

RPO while returning to normal RPO

The previous section provided a methodology for calculating the time required to return to a normal RPO; however, there was no discussion of the shape of the RPO curve. In Figure 67 on page 241, the RPO is plotted as a function of time for a DSE configuration. At time zero, an SRDF/A throughput imbalance begins; this imbalance is caused by an increase in the host write rate to a level that exceeds the capacity of the remote links. After an hour, the host write rate decreases to a value that can be accommodated by the remote links, thus ending the imbalance.

The jagged peaks in Figure 67 that are highlighted with dashed vertical lines are the times at which the cycle switches took place. The sudden drop in RPO is due to the manner in which the delta sets are applied. A delta set is not considered applied until it has been entirely applied. Once the final byte has been applied, the RPO drops steeply.

Figure 67 RPO as a function of time during SRDF/A throughput imbalance

The application hosts begin overrunning the remote links at time 0 seconds, and then reduce to a write rate that the links can handle at time 3,630 seconds.

Note that the cycle switch occurs at time 4,000 seconds, just after the hosts reduced their write rate, and that the time of peak RPO comes just after this point. Note also that the peak RPO values at times 8,000 seconds and 10,000 seconds are much higher than the RPO at the time the imbalance ended. The explanation is that the amount of bandwidth available for catching up was smaller than the amount by which the remote links were being overrun during the imbalance, and that there are always exactly four delta sets for an SRDF/A session (two on the primary subsystem and two on the secondary subsystem).

The RPO is equal to the time that the hosts have been writing to the two delta sets on the primary subsystem. During the last cycle before the end of the imbalance, the largest capture delta set is built up.

(Figure 67 plots RPO in seconds on the vertical axis, from 0 to 7,000, against time in seconds on the horizontal axis, from 0 to 16,000, with the points where the imbalance starts and ends labeled.)

Following the first cycle switch after the imbalance, the large capture delta set becomes the transmit delta set. While this transmit was in progress, there was sufficient time for the new capture delta set to build up enough writes to produce an RPO larger than what existed when the imbalance ended.

Additional DSE restrictions

Consider the following restrictions.

Delta set save pool device protection type

Delta set save pool devices should use RAID 1 protection. The use of RAID 5, RAID 6, or RAID 10 protection, or of metavolumes, for delta set save pool devices is currently not recommended, and in some cases not permitted.

Page-out throughput limitations

The rate at which DSE can page out slots depends greatly on the amount of DA/RA processor cycles available to the DSE task. The rate at which DSE can page data back in also depends on the amount of DA/RA processor cycles available to the DSE task; however, in practice, page-ins are much less likely than page-outs to become limited by those cycles.

As shown in Table 8, the number of slots that can be freed per DA/RA processor second available to the DSE task depends on the quantity of host write data in the slots being paged out. The DSE task processor overhead is minimized when paging out slots that are completely filled (it does not matter whether slots were filled by large-block random host writes or smaller-block sequential host writes), and maximized when paging out slots containing a small amount of host write data.

Table 8  Number of DSE slots that can be scheduled

  Block size    Slots per DA CPU second    Slots per RA CPU second
  4096          540                        430
  8192          540                        430
  16384         610                        440
  32768         800                        550
  65536         1750                       710

If the DSE task is denied sufficient access to processor cycles, the throughput of the page-out and page-in operations decreases. If Enginuity cannot page out data fast enough to keep up with an SRDF/A imbalance, and if that situation is allowed to persist for a sufficiently long period of time, SRDF/A will drop from an SRDF/A cache full condition.

Page-out throughput sizing example

As an example of a page-out throughput calculation, consider the following configuration:

◆ Four DAs

◆ Two RAs

◆ A workload using 16 KB random writes

The DA contribution to the maximum number of slots per second that can be paged out is calculated as:

(# of DA processors) * (DSE DA task utilization) * (page-out rate for 16 KB)

Using the number of DA processors in this example (16, across the four DAs), the DA task utilization share, and the page-out rate for 16 KB from Table 8, the calculation yields:

16 * 0.16 * 610 = 1561 slots/s

The contribution from the RAs to the maximum number of slots per second that can be paged out with the links up is calculated as:

(# of RA processors) * (DSE RA task utilization (up) * (page-out rate for 16 KB)

Using the RA count for this example, the RA task utilization share for remote links up, and the page-out rate for 16 KB from Table 8, the calculation yields:

2 * 0.25 * 440 = 220 slots/s

The contribution from the RAs to the maximum number of slots per second that can be paged out with the links down is:

(# of RA processors) * (DSE RA task utilization (down)) * (page-out rate for 16 KB)

Using the RA count for this example, the RA task utilization share for remote links down, and the page-out rate for 16 KB from Table 8, the calculation yields:

2 * 1.00 * 440 = 880 slots/s

Thus, with the links up, a maximum of 1,781 slots per second can be paged out. With the links down, a maximum of 2,441 slots per second can be paged out.

These rates are achieved only if the DAs accessing the delta set save pools, and the drives comprising them, can handle the paging I/O load, and only if the DAs and RAs are not so saturated that the DSE task is denied its necessary allotment of processor cycles.
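The arithmetic of this sizing example can be sketched as follows (values taken from the example and Table 8; the 0.16, 0.25, and 1.00 task-utilization shares are the guideline figures used in the calculations above):

```python
# Page-out throughput sketch for the 16 KB random-write example.

DA_PROCESSORS = 16         # four DAs in the example
RA_PROCESSORS = 2
DSE_DA_SHARE = 0.16        # DSE task utilization share on a DA
DSE_RA_SHARE_UP = 0.25     # RA share with remote links up
DSE_RA_SHARE_DOWN = 1.00   # RA share with remote links down
DA_RATE_16K = 610          # slots per DA CPU second (Table 8)
RA_RATE_16K = 440          # slots per RA CPU second (Table 8)

da_slots = DA_PROCESSORS * DSE_DA_SHARE * DA_RATE_16K              # ~1561
ra_slots_up = RA_PROCESSORS * DSE_RA_SHARE_UP * RA_RATE_16K        # 220
ra_slots_down = RA_PROCESSORS * DSE_RA_SHARE_DOWN * RA_RATE_16K    # 880

max_links_up = da_slots + ra_slots_up      # ~1,781 slots/s
max_links_down = da_slots + ra_slots_down  # ~2,441 slots/s
```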

Analysis tools

This section describes the internal and external analysis tools that are used to plan and design SRDF/A.

STP Navigator/WLA Performance Manager

STP Navigator and Workload Analyzer Performance Manager are performance analysis tools for Symmetrix and host systems. They each provide the ability to quickly generate performance displays based on collected data.

Workload Analyzer is a component of the EMC ControlCenter management suite that contains the Performance Manager front end for viewing collected data. Performance Manager has the ability to view historical data (*.btp, or *.ttp files), or real-time data using a direct connection to a ControlCenter repository.

STP Navigator is a post-processing tool that is invoked after data collection has been completed; it uses the data collected to create displays of Symmetrix and host performance along with configuration data. All STP Navigator displays are presented in the display pane of the STP Navigator interface as shown in Figure 68 on page 246.

Figure 68 Example Performance Manager/STP Navigator output

Workload Analyzer agents

Symmetrix and host data available to the WLA Collection Manager is gathered from Workload Analyzer Agents (WLA Agents) that can be installed on any Windows NT or UNIX host system.

Statistics gathered by the WLA Agents are contributed by any one or more of the following data providers:

◆ Symmetrix systems that are directly connected. The WLA Agents can poll these subsystems for statistical data.

◆ Hosts on which the WLA Agent is running. The WLA Agent can poll the host on which it is running for statistical data.

◆ A network-connected, proxied host running the SYMAPI Server. A proxied host runs the SYMAPI Server and is connected both to a Symmetrix system and to a host running the WLA Agent. At the request of the WLA Agent, the SYMAPI Server collects Symmetrix data and returns it to the WLA Agent, where it is made available to the WLA Collection Manager. The SYMAPI Server can be installed on Windows NT, UNIX, and MVS hosts with a direct connection to a Symmetrix system. To use the Symmetrix data collected by the SYMAPI Server for z/OS, the z/OS host must be connected to a WLA Agent through TCP/IP.

EMC SRDF/A planning and design service

This section discusses EMC Storage Replication Design Services.

Overview

The EMC Storage Replication Design Service uses EMC business continuity experts, toolsets, and methodologies. The service begins with a review of the prospective customer’s business requirements and data center strategies. It culminates in an executive summary meeting to discuss the proposed solution and implementation strategy.

EMC captures and analyzes relevant application performance statistics to identify the risks and model the design (solution). The solution is designed to meet the desired service levels and business requirements. EMC makes recommendations as to the specific hardware, software, and networking configurations to effect the desired solution.

Designs can be generated for single- and multi-site business continuity solutions. They can support the full range of EMC storage networking technologies and storage platforms (Symmetrix and CLARiiON®). Designs can use replication software families including SRDF and MirrorView™.

EMC field personnel may be contacted for additional details regarding this service offering.

Applicability

Appropriate candidates for the EMC Storage Replication Design service are:

◆ Customers and prospects seeking better disaster recovery capabilities with multiple data center strategies.

◆ Customers and prospects seeking to exploit the sophisticated disaster recovery strategies now possible with EMC SRDF/Star and Enginuity 5671 code.

◆ Customers and prospects lacking the in-house infrastructure expertise to specify a remote replication design.

◆ Customers and prospects who have failed a disaster recovery audit, or those facing one.

◆ Customers and prospects needing to expand their existing replication infrastructure to include unanticipated growth, or additional applications.

The service is offered on a recurring basis to monitor replication capacity.

Service positioning

Certain storage competitors with replication software offerings also provide design services. However, EMC is the only business continuity supplier that has developed consistent processes and proven tools to create workable designs.

Benefits realized from adopting the EMC Storage Replication Design Service include:

◆ Faster times to implementation and minimal customer impact.

◆ The tailoring of the designs to each customer’s particular environment and application workloads, rather than to artificial or generic workloads.

◆ The use of proven tools, and the expertise of the best replication architects, instead of “rules-of-thumb” or “back-of-the-envelope” predictions.

◆ A guarantee that replication implementations work properly the first time, minimizing after-the-fact tuning and remedial work on the part of the customer.

◆ Presentation of an upfront storage replication design to meet business objectives and minimize implementation risks and costs:

• Unique, customer-specific workload-based designs.

• Consistent processes and proven tools to create workable designs.

• Critical technical requirements supporting desired service levels are identified before implementation.

• The costs of over-purchasing, or over-provisioning the infrastructure are avoided.

◆ Mitigation of risk for today’s business continuity designs by:

• Reducing the need for constant application and storage tuning to meet over-committed service levels.

• Addressing most business continuity strategies, including single and multiple data center solutions.

◆ The inclusion of forecasted growth patterns to help customers plan for future replication requirements.

Project scope

Experienced EMC Technology Solution personnel or authorized agents work closely with the customer’s or prospect’s staff to manage the EMC Storage Replication Design Service. During this engagement EMC undertakes the following:

◆ A review of key business requirements and data center strategies to determine and direct a storage replication design.

◆ Collection of multiple days of I/O statistics to be used by distance replication experts for modeling the design, and identifying risks to the overall solution.

◆ The use of state-of-the-art analysis tools, along with captured application performance data, to predict needed network bandwidth, and system resources while mitigating inherent risks.

◆ Convening of an Executive Summary meeting to discuss the proposed solution (detailed in the Technical specification document), solution risk mitigation, and a proposed implementation strategy to be developed in the EMC Implementation Service.

Scope exclusions

EMC is responsible for performing only the services described in this EMC Corporation Statement of Work (SOW). Services outside the scope include, but are not limited to:

◆ Any application or host system access that encompasses coding, scripting, or application analysis.

◆ Applications outside of the tasks described in this EMC Corporation SOW.

◆ Determining business requirements such as RPO, RTO, TCO, and the numbers of data centers and data replications for each site.

◆ Any data center migration or consolidation efforts. The service may be used in conjunction with EMC Data Migration Service.

◆ Information and application-tiered storage design.


◆ Storage and application performance tuning.

◆ Storage replication bandwidth and channel extension equipment procurement.

◆ Replication network troubleshooting or performance tuning or both.

◆ Primary replication design and performance tuning not applicable to remote replication method.

◆ Hardware and software equipment technology refresh or migration planning or both.

◆ Implementation planning and design and implementation activities and tasks.


Chapter 5
Implementation of SRDF/A

This chapter presents these topics:

◆ SRDF/A pre-implementation considerations.............................. 254
◆ SRDF/A additional considerations...................................... 256
◆ Software requirements and customization............................... 261
◆ SRDF/A configuration overview......................................... 268
◆ DSE pool definition................................................... 287
◆ Establishing a Cascaded SRDF configuration............................ 296


SRDF/A pre-implementation considerations

SRDF/A with MSC mainframe implementations require careful consideration and thorough preparation. The following are some of the many variables involved, along with some design considerations that can both simplify the project and help guarantee a robust implementation:

1. Document and raise awareness of cache, bandwidth and configuration issues early in the project.

2. Ensure that everyone on the project attends in-depth SRDF/A and MSC training.

3. Start with, and maintain, the installation at current Enginuity and software patch levels.

4. Open a Software Assistance Center (SAC) case for any problems encountered by calling 1-800-EMC-4SVC (1-800-362-4782).

5. Review and gain approval for all the Configuration Design documents.

6. Recognize that even though 60 seconds of Secondary Delay is the goal, peak write periods will probably exceed 60 seconds. Prepare an action plan for this.

7. Document the workload (write %), RPO, and RTO requirements.

8. Configure link bandwidth such that the average WRITE output bandwidth on the RDF links is at least equal to the average WRITE input bandwidth from the host.

9. Collect measurements using CMF or RMF reports, ECC Workload Analyzer, or Symmetrix STP data; this will help facilitate the provisioning of both the required cache and link bandwidth.

10. Wherever possible, ensure that the volumes involved in the SRDF/A relationship can be addressed as a consecutive range specifying their starting and ending Symmetrix device numbers. Commands like REFRESH, RFR-RSUM and Consistent Split operate faster when consecutive logical volume ranges are specified.

11. Factor in bursts of write I/O activity and long duration write peak periods.

12. Identify Gatekeepers for each product installed as per the Gatekeeper guidelines in “Gatekeepers” on page 257.


13. An SRDF/A session consists of all devices defined to an SRDF/A group; once defined, all operations apply to the SRDF/A group and not the individual devices in the SRDF/A group.

14. To activate SRDF/A, all devices in the SRDF/A group must have a status of R/W-AD. SRDF/A will not activate if any device has a status of TNR-AD.

15. When executing change commands, always follow the three golden steps:

• Understand the state of the object (query)

• Issue the change against the object (change command)

• Verify that the change to the object took effect (query)


SRDF/A additional considerations

The following is a list of key concepts that should be taken into consideration during an SRDF/A implementation.

Determine the recovery system environment

There are many factors to be considered in assessing what a recovery effort should entail. Usually, determination of what needs to be recovered, the order of recovery of applications and systems, and the type of resources needed at the recovery site are undertaken very early during the planning stages of a Disaster Recovery project.

Some of these factors on the recovery (secondary) site include:

◆ A system that can be IPLed to condition the recovery environment. Refer to Chapter 7, “SRDF/A and SRDF/A MSC Return Home Procedures,” for additional details.

◆ Channel Configuration (IOCDS considerations).

◆ Testing compared to disaster — whether the BCVs or the R2s are used for each of these activities.

SRDF/A link bandwidth

Link bandwidth is critical for a successful SRDF/A implementation, as discussed in section 4.6. Current best practices suggest an allowance for a 30 percent growth factor (over the initial estimate).

Multiple SRDF groups may be required to ensure that the long distance and high bandwidth links are sufficiently utilized during initial synchronization, and also during resynchronization operations utilizing adaptive copy disk mode.
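As a rough illustration of this sizing rule, the following sketch (a hypothetical helper, not an EMC tool) computes the average host write rate from a set of interval samples, such as those derived from RMF/CMF or STP data, and applies the 30 percent growth allowance:

```python
# Illustrative sizing sketch: the SRDF link bandwidth should at least
# cover the average host write rate, plus the suggested 30 percent
# growth allowance. Sample values below are hypothetical.
def required_link_bandwidth(write_mb_per_sec, growth_factor=0.30):
    """Return (average write MB/s, recommended link MB/s)."""
    if not write_mb_per_sec:
        raise ValueError("no samples")
    avg = sum(write_mb_per_sec) / len(write_mb_per_sec)
    return avg, avg * (1.0 + growth_factor)

samples = [42.0, 55.0, 38.0, 65.0]   # hypothetical interval averages, MB/s
avg, recommended = required_link_bandwidth(samples)
print(f"average write rate: {avg:.1f} MB/s, "
      f"recommended link bandwidth: {recommended:.1f} MB/s")
```

Peak write periods and bursts still need to be examined separately; an average-only calculation understates the bandwidth needed during long-duration peaks.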

MSC high-availability support

High-availability support in MSC is enabled by defining and activating an MSC group. “SRDF/A Multi-Session Consistency (MSC) mode” on page 146 through “SRDF/A MSC session cleanup process” on page 162 provide greater detail on MSC; “Customization of the initialization parameters” on page 262 provides an example of defining the MSC group.


MSC high availability support:

◆ Allows the host to control the SRDF/A cycle switching and is only relevant for SRDF/A environments. Without it, the cycle switching is done by Enginuity within the Symmetrix system.

◆ Is not required when there is only one SRDF/A group in one Symmetrix system, nor is it required when there are multiple SRDF/A groups and consistency across groups is not a requirement. However, MSC is required when there are multiple SRDF/A groups and consistency across two or more groups is required.

◆ Provides redundancy for MSC environments by allowing another instance of MSC to perform the required cycle switch in the event that the primary MSC becomes incapable of accomplishing it.

During MSC definition, a weight factor is supplied. Refer to Chapter 3 of the Host Component product guide for a detailed description of the MSC weight factor parameter.

Gatekeepers

The LPAR(s) facilitating SRDF/A’s operation require a connection, using a Gatekeeper, to each of the Symmetrix systems involved in the MSC consistency group definition. A Gatekeeper is the mechanism by which access to the Symmetrix system is accomplished. It can be any CKD device configured on the Symmetrix system where the primary volumes reside, and it is highly recommended that it be offline to the LPAR at all times to ensure that it cannot be accessed or used for any other purpose.

The Gatekeeper can be assigned to any appropriate device; a recommended size is approximately sixty cylinders (60 cyl).

Figure 69 on page 259 illustrates the number of required Gatekeepers when there are three LPARs and two Symmetrix systems involved. The rules for determining the number of required gatekeepers are as follows:

◆ 1 per SCF instance per LPAR
◆ 1 per CSC instance per LPAR
◆ 1 per MSC instance per LPAR
◆ 1 per GNS instance per LPAR
◆ 1 per ASY instance per LPAR
◆ 1 per LPAR where SCF is running
◆ 1 per Symmetrix system (participating in SRDF/A)
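The counting rules above can be sketched as a small calculator. This is an illustrative helper that assumes the rules are simply additive; the actual component layout per LPAR is site-specific:

```python
# Hypothetical helper applying the gatekeeper-count rules listed above.
# Each entry in lpar_instances maps a component name (SCF, CSC, MSC,
# GNS, ASY) to its number of instances in that LPAR.
def gatekeepers_per_symmetrix(lpar_instances):
    total = 1  # 1 per Symmetrix system participating in SRDF/A
    for inst in lpar_instances:
        total += sum(inst.values())   # 1 per component instance per LPAR
        if inst.get("SCF", 0) > 0:
            total += 1                # 1 per LPAR where SCF is running
    return total

# Two LPARs, each running one instance of all five components:
lpars = [{"SCF": 1, "CSC": 1, "MSC": 1, "GNS": 1, "ASY": 1}] * 2
print(gatekeepers_per_symmetrix(lpars))   # 1 + 2 * (5 + 1) = 13
```

Repeat the calculation for each Symmetrix system participating in SRDF/A to arrive at the total shown in Figure 69.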


The following are SRDF/A accepted guidelines for allocating Gatekeepers:

◆ The SCF, ASY, and GNS processes will use the first available device in the system for Gatekeeper processing. There is no way to specify a different device.

◆ The MSC Gatekeeper must not be specified as the first device in a frame, and must not be included in any replication process. Each MSC session must have its own unique MSC Gatekeeper device specified.

◆ The CSC Gatekeeper must not be specified as the first device in a frame in order to avoid contention with any other gatekeeper processes. However, if a CSC Gatekeeper is not specified, it will default to the first device in the frame. The CSC Gatekeeper device cannot be included in the replication process.

◆ SRDF Host Component will use the first device in a frame for heartbeat operations and can be defined as any device that is not being used as a gatekeeper for any other purpose.

EMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS

Page 259: EMC SRDF/A and SRDF/A Multi-Session Consistency on z · PDF fileEMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS Version 1.5 • Planning an SRDF/A and SRDF/A Multi-Session

Implementation of SRDF/A

Figure 69 Symmetrix Dedicated Gatekeepers

[Figure 69 shows three LPARs (LPAR-1 on System-A, and LPAR-1 and LPAR-2 on System-B), each running SCF (versions 5.6 and 5.7), CSC, MSC, GNS, and ASY instances against Symmetrix-1 and Symmetrix-2. Each Symmetrix system requires 16 or more dedicated gatekeepers, for a total of 32 or more dedicated Symmetrix gatekeepers.]


SRDF/A configuration

The SRDF/A components and configuration used throughout this chapter are outlined in Figure 70.

Figure 70 Test configuration used

[Figure 70 shows the test configuration: R1 devices on a DMX 810 (UCB addresses AA00-AA0F, Symmetrix device numbers 0140-014F, gatekeepers AC70-ACDF, GNS group GNS1, RDF groups 28 and 29, DSE device numbers 0970-097F) connected over FICON and SRDF links, through SRDF directors 38 and 39, to R2 devices and BCVs on a DMX 2000 (UCB addresses A600-A60F, Symmetrix device numbers 0140-014F, gatekeepers A870-A8DF, BCV UCB addresses A616-A626, BCV device numbers 0156-0166, GNS group GNSR2, RDF groups 28 and 29, DSE device numbers 0970-097F).]


Software requirements and customization

This section discusses the software requirements and the customizations that are necessary for successfully operating SRDF/A. The following topics are covered:

◆ Technical requirements and limitations

◆ Choosing appropriate Software Modules

◆ Symmetrix Control Facility (SCF) authorization codes

◆ Customization of the initialization parameters

◆ ResourcePak Base Task Startup Procedure

Technical requirements and limitations

The following are the main requirements that must be met:

1. Current maintenance

2. Minimum software levels

3. Minimum Enginuity levels

Current maintenance

The most current maintenance should always be applied to all products. The availability of maintenance should be checked on a monthly basis to ensure that all products meet the current maintenance levels. The software levels described below should be considered minimal; additional software levels may be required to support the more advanced features.

Minimum software levels

The following minimum software levels are required for SRDF/A with MSC:

◆ Version 5.5 of ResourcePak Base for OS/390 and z/OS

◆ Version 5.4 of SRDF Host Component for OS/390 and z/OS

The following software products, though optional, are also strongly recommended:

◆ Version 5.6 of the TimeFinder/Clone product set for OS/390 and z/OS


◆ Version 5.4 of the TimeFinder/Mirror product set for OS/390 and z/OS

Minimum Enginuity levels

The following minimum Enginuity level must be met for SRDF/A with MSC:

◆ Symmetrix storage systems that are at revision level 5671.32.36 or later

Symmetrix Control Facility (SCF) authorization codes

Licensed feature code management

As discussed in Chapter 2, EMCSCF manages licensed feature codes (LFCs) to enable separately chargeable features in EMC software. These features require an LFC to be provided during the installation and customization of EMCSCF.

The LFCs are issued in the following format:

SCF.LFC.LCODES.LIST=wwww-xxxx-yyyy-zzzz

Where wwww-xxxx-yyyy-zzzz is a number/letter combination written on the license documentation shipped with the software. This entry tells the SCF started task that the various products are licensed for use on this equipment.
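As an illustration, the expected shape of an LFC can be checked with a simple pattern. The exact character set accepted by SCF is not documented here, so the letters-and-digits assumption below is only an assumption:

```python
import re

# Hedged sketch: validate that a licensed feature code matches the
# documented wwww-xxxx-yyyy-zzzz shape (four groups of four characters).
# The character set (letters and digits) is an assumption, not a
# documented SCF rule.
LFC_PATTERN = re.compile(r"^[A-Za-z0-9]{4}(-[A-Za-z0-9]{4}){3}$")

def looks_like_lfc(code):
    return bool(LFC_PATTERN.match(code))

print(looks_like_lfc("AB12-CD34-EF56-GH78"))   # True
print(looks_like_lfc("AB12-CD34"))             # False
```

A check like this only catches obvious transcription errors; the actual validity of a code is determined by SCF against the license documentation shipped with the software.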

Note: The above parameters cannot be REFRESHED using the SCF modify command. The SCF started task will have to be restarted if the license codes are updated.

Customization of the initialization parameters

After loading the product libraries and applying all the applicable PTFs, proceed with the customization of the initialization parameters.

A sample of the ResourcePak Base initialization file can be found in the ResourcePak Base SAMPLIB SCFINI member. Detailed information can be found in Chapter 5 of the EMC ResourcePak Extended for z/OS Product Guide.

Following is an example of a production SCFINI parmlib member which includes SRDF/A with MSC:


/* ASY = SRDFA MONITOR */
SCF.ASY.MONITOR=DISABLE
SCF.ASY.POLL.INTERVAL=60
SCF.ASY.SMF.RECORD=206
SCF.ASY.SMF.POLL=30
SCF.ASY.SECONDARY_DELAY=60
/* CROSS SYSTEMS COMMUNICATIONS */
SCF.CSC.ACTIVE=YES
SCF.CSC.EXPIRECYCLE=20
*
/* Group Name Services */
SCF.GNS.ACTIVE=YES
*
/* License Codes */
SCF.LFC.LCODES.LIST=WWWW.XXXX.YYYY.ZZZZ   /* SRDF/A with MSC */
SCF.LOG.RETAIN.COUNT=2
SCF.LOG.RETAIN.DAYS=2
SCF.LOG.TRACKS.PRI=30
SCF.LOG.TRACKS.SEC=50
/* MSC = MULTISESSION CONTROL */
SCF.MSC.VERBOSE=NO
SCF.MSC.ENABLE=YES
*
/* SAVE POOL MONITOR DEFINITION */
SCF.SDV.LIST=ENABLE
SCF.SDV.01.LIST=PERCENT=(80,90)
SCF.SDV.01.LIST=DURATION=5
SCF.SDV.01.LIST=ACTION=MESSAGE
SCF.TRACE.MEGS=200
SCF.TRACE.RETAIN.COUNT=2
SCF.TRACE.RETAIN.DAYS=2
*
/* HLQ for work datasets and logs */
SCF.WORK.HLQ=ICO.PROD.SSCF57P
SCF.WORK.UNIT=SYSDA
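A minimal parser for this style of member can help audit the active settings. This sketch assumes the syntax shown above: lines beginning with an asterisk and /* ... */ text are comments, and everything else is a KEY=VALUE pair:

```python
import re

# Illustrative parser for SCFINI-style members. The comment conventions
# ('*' lines and '/* ... */' text) are assumptions based on the sample
# member above, not a formal grammar.
def parse_scfini(text):
    params = {}
    for line in text.splitlines():
        line = re.sub(r"/\*.*?\*/", "", line).strip()
        if not line or line.startswith("*"):
            continue
        key, _, value = line.partition("=")
        if key:
            params[key.strip()] = value.strip()
    return params

sample = """
/* ASY = SRDFA MONITOR */
SCF.ASY.MONITOR=DISABLE
*
SCF.MSC.ENABLE=YES
"""
print(parse_scfini(sample))
# {'SCF.ASY.MONITOR': 'DISABLE', 'SCF.MSC.ENABLE': 'YES'}
```

Note that values can themselves contain an equals sign (for example, SCF.SDV.01.LIST=PERCENT=(80,90)); splitting on the first equals sign preserves them.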

SRDF/A initialization parameters are specified in SRDF Host Component using the RDFPARM ddname.

The SUBSYSTEM_NAME initialization parameter is required and must be the first noncommented initialization parameter. It indicates the z/OS subsystem name specified in IEFSSNxx for use by SRDF Host Component.

Format   SUBSYSTEM_NAME=name

Where name is the name of the subsystem; it can be up to four characters.

Example  SUBSYSTEM_NAME=EMC2


Command Prefix—subsystem command character

The required COMMAND_PREFIX parameter is the prefix for all SRDF Host Component commands. This command prefix must be unique to SRDF Host Component. The COMMAND_PREFIX can be registered with the sysplex by adding the keyword REGister to the parameter value. Registering the command prefix prevents ambiguity between similar command prefixes defined for different subsystems. z/OS does not allow the registration of a command prefix that is the same as, or a subset of, an existing registered command prefix. Further, z/OS does not allow the registration of a command prefix for which an existing registered command prefix is a subset. Any attempt to do so results in an error message and an initialization failure.

Format   COMMAND_PREFIX=prefix

Where prefix is the one- to eight-character prefix to be used.

COMMAND_PREFIX=#         Prefix not registered with the sysplex
COMMAND_PREFIX=#HC,REG   Prefix registered with the sysplex
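The subset rule described above amounts to a leading-prefix comparison. The following sketch (hypothetical, not the z/OS registration service) shows the conflict test:

```python
# Sketch of the registration rule described above: a new prefix is
# rejected if it equals an already registered prefix, is a leading
# subset of one, or has one as its own leading subset.
def prefix_conflicts(new_prefix, registered):
    return any(new_prefix.startswith(p) or p.startswith(new_prefix)
               for p in registered)

registered = {"#HC", "$R"}
print(prefix_conflicts("#HC1", registered))   # True: "#HC" is a subset of "#HC1"
print(prefix_conflicts("#", registered))      # True: "#" is a subset of "#HC"
print(prefix_conflicts("@SRDF", registered))  # False
```

This is why a very short prefix such as # can block, or be blocked by, longer prefixes registered by other subsystems.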

Detailed information about all of the parameters can be found in Chapter 3 of the SRDF Host Component product guide.

The following is an example of a production Host Component RDFINI parmlib member that includes SRDF/A with MSC:

SUBSYSTEM_NAME=EMC2
COMMAND_PREFIX=#                 PREFIX to be used for all HC commands
SECURITY_QUERY=ANY
SECURITY_CONFIG=ANY
SAF_CLASS=EMCCLASS
SAF_PROFILE=EMC.VALIDATE.ACCESS
MESSAGE_PROCESSING=YES,128
MAX_QUERY=4096
MAX_ALIAS=400
MAX_COMMANDQ=200
SHOW_COMMAND_SEQ#=YES
OPERATOR_VERIFY=NONE
SYNCH_DIRECTION_ALLOWED=R1<>R2
SYNCH_DIRECTION_INIT=R1>R2
FBA_ENABLE=NO
MESSAGE_LABELS=MVS_CUU
HCLOG=ALL
*******************************************************************

* MSC FOR SRDF-A


********************************************************************
MSC_GROUP_NAME=MSC1              MULTI-SESSION GROUP
MSC_INCLUDE_SESSION=ACD0,(28)    SYMM WHERE GROUP IS FOUND
MSC_INCLUDE_SESSION=ACD1,(29)    SYMM WHERE GROUP IS FOUND
MSC_WEIGHT_FACTOR=0              HIGH AVAILABILITY (0,1,2,3)
MSC_CYCLE_TARGET=30              MIN CYCLE IN SECONDS 15-1800
MSC_GROUP_END                    END DEFINITION

SCF DD DUMMY

Multiple instances of EMCSCF can be run as separate subsystems. This is desirable when testing new versions of EMCSCF or of EMCSCF-enabled products. It can be accomplished by adding the following DD statement to the EMCSCF test procedure:

//SCF$nnnn DD DUMMY

nnnn defines this instance of EMCSCF as a unique z/OS subsystem. The DD statement would then be used in any task from which this specific instance of EMCSCF is required.

For example:

Test version of EMCSCF:

//EMCSCF   EXEC PGM=SCFMAIN,TIME=1440,REGION=0M
//STEPLIB  DD DISP=SHR,DSN=test.load_library
//SCFINI   DD DISP=SHR,DSN=init_dataset
//SYSABEND DD SYSOUT=*
//SCF$V570 DD DUMMY

Any task needing to use this instance of EMCSCF would add a connection DD statement.

If a version of SRDF Host Component needed to use this version of EMCSCF, the JCL for SRDF Host Component would add the DD statement as follows:

//HCTEST   EXEC PGM=EMCSCF
//SYSOUT   DD SYSOUT=A
//SYSIN    DD *
//SCF$V570 DD DUMMY


SUB=MSTR

When running SRDF/A with MSC, it is recommended that SUB=MSTR be specified, along with a dispatching priority of SYSSTC, to ensure that cycle switch windows are able to close within the allotted time.

It is recommended that the EMCSCF procedure (member EMCSCF in the SCF SAMPLIB library) be copied to a system PROCLIB that is used for started task START commands, where it can then be easily customized. When other EMC mainframe applications are installed and started with SUB=MSTR, EMCSCF should also be started with SUB=MSTR; in that case, copy the EMCSCF procedure into the SYS1.PROCLIB procedure library concatenation.

ResourcePak Base (SCF) and SRDF/A startup procedures

It is essential that ResourcePak Base (SCF) is started before Host Component (SRDF). Issue the command:

S JOBNAME (the subsystem name specified in your SCFINI parmlib member)

Depending on timing of the address spaces, SRDF may start prior to SCF. If this occurs, issue the command #SC GLOBAL,PARM_REFRESH to initiate activating SRDF/A with MSC.

Open the SCF SYSOUT and look for the following messages which are expected during ResourcePak Base Startup:

SUBSYSTEM INTERFACE ACTIVATED
EMC SYMMETRIX CONTROL FACILITY VERSION 570 NOW ACTIVE
MSC - TASK STARTED
MSC - TASK ENABLED
ASY MONITOR TASK STARTED
ASY MONITOR TASK ENABLED

To start Host Component, issue the following:

S EMCRDF


Open the SRDF SYSOUT and look for the following messages expected during SRDF Host Component initialization:

EMC SUBSYSTEM USING COMMAND PREFIX #
MESSAGE INTERFACE INITIALIZED
SUBSYSTEM LOADED
SRDF HOST COMPONENT V5.5.0 NOW ACCEPTING COMMANDS
SRDF HOST COMPONENT V5.5.0 NOW PROCESSING COMMANDS


SRDF/A configuration overview

The following steps need to be completed to configure an SRDF/A session:

1. Define RAs to the Symmetrix system.

The initial SRDF Group must be defined in the DMX configuration. This is completed by the EMC Customer Engineer either at the initial install time, or at a later time, using a configuration change or dynamic SRDF Group (Host Component).

2. Identify both primary and secondary STD devices.

It is necessary to understand which primary site volumes need to be replicated to the secondary site. Most customers choose to have two groups of primary volumes replicated to the secondary site.

The first contains z/OS volumes that only need to be sent to the secondary site when they have been updated. Examples of z/OS volumes are: SYSRES, page packs, temporary work volumes where &&temp data is stored, and spool. It is important that these volumes contain no datasets that must be available up to the point of failure.

The second group of volumes contains the application datasets necessary to affect the restart of those applications at the secondary site. These need to be replicated up to the point of failure.

3. Once the volume groups have been identified, they need to be paired from the primary site to the secondary site. It is best practice to use device number ranges; however, this may not always be possible.

Non-ideal device configurations

The following device configurations are considered non-ideal because they contribute to a general imbalance between the primary and secondary subsystems:

◆ The primary (R1) devices are RAID 1 or RAID 10 and the secondary (R2) devices are RAID 5.

◆ The primary (R1) devices are RAID 10 and the secondary (R2) devices are RAID 1.


◆ The primary (R1) devices have a higher RPM than the secondary (R2) devices.

◆ The primary subsystem has more devices than the secondary subsystem.

◆ The secondary (R2) subsystem has a lower volume WP limit than the primary (R1) subsystem as a result of BCVs or Clones being used at the secondary subsystem.

◆ Configuring the primary (R1) devices as RAID 5 (7+1) with the secondary (R2) devices as RAID 5 (3+1) is not recommended.

4. Identify the R2 BCVs

Even though one set of BCV/Clones could be used for both production replication recovery and DR testing at the secondary site, the ideal scenario would entail having two sets of BCV/Clones, thereby separating production replication and DR testing activities.

Some guidelines for BCV/Clones:

◆ Try to keep the BCV/Clone physical drives and RAID protection similar to those of the secondary Standards, because write destaging performance on the secondary subsystem is critical. At a minimum, have the same number of physical drives.

◆ Configure the secondary Standards and BCV/Clones on separate physical disks.

◆ If only a single set of BCV/Clones is used for both production recovery and DR testing, RTO cannot be guaranteed during DR testing, because the DR test is using the only BCVs, which are needed for the golden copy during a real disaster.

◆ Do not concentrate high write activity volumes on the same physical drives or RAID set, since any one logical secondary volume hitting the write pending limit may cause the volume to go TNR which, in turn, could result in SRDF/A dropping.

◆ For batch environments with intensive write cycles, writing to the Standards and Established BCVs causes multiple destage operations for a single write. To improve secondary subsystem performance, the BCVs should be split prior to the start of batch processing, since this eliminates the write destage to the BCV volumes during peak periods. Batch jobs can be automatically submitted by the Scheduler to Split and Re-Establish the BCVs around the peak periods.

◆ BCV/Clone batch jobs can be built for the secondary recovery site; these can be used for testing and also for recovery in the event of a real disaster.

◆ For faster execution, use ranges of devices in commands such as:

SPLIT 1,RMT(8800,0000-0FFF,02),CONS(GLOBAL)

This produces faster results than using one SPLIT command for each device in the range.
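Building such ranges from an arbitrary list of devices is a simple coalescing exercise. The helper below is hypothetical and collapses consecutive Symmetrix device numbers into range strings suitable for a range-style command:

```python
# Hypothetical helper: collapse a list of Symmetrix device numbers
# (hex strings) into consecutive ranges, so that one range command
# can replace many per-device commands.
def to_ranges(devnos):
    nums = sorted(int(d, 16) for d in devnos)
    ranges, start, prev = [], nums[0], nums[0]
    for n in nums[1:]:
        if n != prev + 1:          # gap found: close the current range
            ranges.append((start, prev))
            start = n
        prev = n
    ranges.append((start, prev))
    return [f"{a:04X}-{b:04X}" for a, b in ranges]

print(to_ranges(["0140", "0141", "0142", "0150"]))
# ['0140-0142', '0150-0150']
```

Keeping the volumes in an SRDF/A relationship addressable as a small number of consecutive ranges, as recommended earlier in this chapter, keeps output like this short and the resulting commands fast.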

RDF groups and sharing of the RDF directors

To create a dynamic SRDF group, it is important to know which RAs connect the two Symmetrix systems together. This requires knowing the ports and DMX controller cards you will be using for this group.

Table 9 Physical Director/Port Slot to MF SW SRDF Director ID numbers

Table 9 facilitates the mapping of the Physical Director/Port Slot to the Mainframe SRDF Director ID numbers and the other way round. The 16 possible cards occupy slot numbers 0 through F. For example, card number 8 (slot# 7), and port number D would correspond to SRDF Director ID number 38. This table should be easily accessible in order to avoid ambiguity and errors during configuration, as well as assist in any subsequent troubleshooting as required.

In the examples throughout this guide, SRDF Directors 38 and 39 are used.

CARD    1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16
D=3    31 32 33 34 35 36 37 38 39 3A 3B 3C 3D 3E 3F 40
C=2    21 22 23 24 25 26 27 28 29 2A 2B 2C 2D 2E 2F 30
B=1    11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F 20
A=0    01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10
Slot#   0  1  2  3  4  5  6  7  8  9  A  B  C  D  E  F
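The table reduces to a simple formula: the SRDF Director ID is the processor row base (A through D correspond to bases 0x00 through 0x30) plus the card number, where the card number is the slot number plus one. A sketch of the mapping:

```python
# Sketch of the Table 9 mapping: SRDF Director ID = processor row base
# + card number, where card number = slot + 1 and the letters A-D
# (shown as A=0 through D=3 in the table) give bases 0x00 to 0x30.
def director_id(slot, processor):
    base = {"A": 0x00, "B": 0x10, "C": 0x20, "D": 0x30}[processor.upper()]
    return f"{base + slot + 1:02X}"

print(director_id(7, "D"))   # card 8 (slot 7), processor D -> '38'
print(director_id(0, "A"))   # card 1 (slot 0), processor A -> '01'
```

This reproduces the example in the text: card number 8 (slot 7) with processor D corresponds to SRDF Director ID 38.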


Creating SRDF/A device groups

There are two ways to define the SRDF/A session device groups. The first method involves defining the groups in the Host Component, thereby making them local and restricting them such that only the defining LPAR’s SRDF tasks can access and interact with the group and its members. The specification of this type of group is included in the RDFINI ddname file and is illustrated as follows:

*
* GROUP NAME DEFINITION FOR SRDF-A VOLUMES
*
GROUP_NAME=SRDFA-28
INCLUDE_RAG=ACDF,(28)
GROUP_END
*
GROUP_NAME=SRDFA-29
INCLUDE_RAG=ACDF,(29)
GROUP_END

It is possible to create a group definition and operate SRDF/A with this group definition using the RDFGRP parameter. Changes to the RDFGROUP definition do not automatically update the SRDF/A session since dynamic updates of SRDF/A sessions are currently unsupported. Consequently, after changing the RDFGROUP definition, a stop and restart of the SRDF/A session is required for the changes to take effect.

When using SRDF GROUP_NAMEs, SRDF, by default, will use the first device in the address range to handle the command. For example:

GROUP_NAME=SRDFGRP
INCLUDE_CUU=AA00-AA07
GROUP_END

Issuing a #SC VOL,SRDFGRP,ALL command will result in SRDF using device CUU=AA00 to handle the volume query for the subsystem. If, however, CUU=AA00 happens to be an application volume, such as a CICS log file, that requires adherence to a very strict I/O response time requirement, then the query could negatively impact the application’s performance. Such problems can be avoided by ensuring that the first device in the Symmetrix system contains no application data. Another way is to use an RA group definition to specify the gatekeeper device to be used, as follows:

GROUP_NAME=SRDFGRP
INCLUDE_RAG=AA07,(28)
GROUP_END


Now, issuing a #SC VOL,SRDFGRP,ALL command results in SRDF using device CUU=AA07, a defined CKD gatekeeper, for the query; defaulting to CUU=AA00, which may be an application volume, does not occur.

The second type of definition is the Group Name Services (GNS) definition. This can be viewed as being global since the definition is actually kept within the Symmetrix. SRDF groups defined with the RDFGRP parameter can be updated from any connected host or server that has the EMC software capability.

Creation of SRDF/A R1/R2 pairs

It is important that the whole SRDF/A configuration is balanced; this means that the physical resources for the primary and secondary pairs are comparable. Physical devices that comprise the primary and secondary should be “matched” so that they use the same RAID protection scheme. The number of physical devices and the cache resources for each of the primary and secondary Symmetrix systems should be as close to equal as possible. Physical disks should also be the same speed.

Dynamic creation of primary and secondary pairing can be very time consuming. If multiple LPARs are available, then some parallelism can be utilized. In many cases it may be instructive to just do the pairing and not start the adaptive copy; this helps to reduce the elapsed time for the pairing. Over very long distances, pairing of devices may take several minutes per device pair.

R2 BCV initial device pairing—initial (full) synchronization

The pairing of secondary devices to BCVs over distance will result in long elapsed times. Using the target host, if it exists, to do the pairing is more efficient and results in shorter elapsed times. The importance of the type of BCV protection scheme and the number of devices being used is again emphasized. This may determine whether the BCVs are always established, or established only during recovery, especially if clone or clone emulation is being used.

EMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS

Dynamic SRDF

The following steps include doing a query of the SRDF links to identify which RA ports are connected to each remote Symmetrix, since there may be multiple Symmetrix systems defined to specific SRDF links. This aids in knowing which Symmetrix system is configured for SRDF/A processing, based on their respective serial numbers.

Query the SRDF links to identify the ports that are connected to the SRDF/A Symmetrix systems as follows:

#SQ LINK,ACDF,E

EMCMN00I SRDF-HC : (40) #SQ LINK,ACDF,E
EMCQL01I SRDF-HC EXTENDED DISPLAY FOR (40) #SQ LINK,ACDF,E 859
DR GP _OTHER__S/N_ OD OG RCS | %S M:SS RATE| %L DD:HH:MM:SS TOTAL-I/O
37 SW 000190102000 .. .. FYY | .. .... ... | .. ........... ...........
38 SW 000190102000 .. .. FYY | .. .... ... | .. ........... ...........
39 SW 000190102000 .. .. FYY | .. .... ... | .. ........... ...........
3A SW 000190102000 .. .. FYY | .. .... ... | .. ........... ...........
END OF DISPLAY

The above SQ Link shows that there are multiple links available for defining the RDFGroups. In this example, RAs 38 and 39, and RDFGRPs 28 and 29, have been assigned for use.
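When many links are involved, the RA-to-serial mapping can be pulled out of the display programmatically. The following is an illustrative parser, assuming only the line layout shown in the example above (two hex digits for the RA, a link type, then a 12-digit serial); it is not an EMC-provided utility:

```python
import re

# Matches data lines such as: "37 SW 000190102000 .. .. FYY | ..."
LINK_LINE = re.compile(r"^\s*([0-9A-F]{2})\s+(\w+)\s+(\d{12})\b")

def parse_link_display(lines):
    """Return a {RA#: remote serial} map from #SQ LINK output lines.
    Header and message lines do not match the pattern and are skipped."""
    links = {}
    for line in lines:
        m = LINK_LINE.match(line)
        if m:
            links[m.group(1)] = m.group(3)
    return links
```

Applied to the display above, this would report RAs 37, 38, 39, and 3A all connected to serial 000190102000, confirming which RDF directors are candidates for the new SRDF groups.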

The following topology display is useful during the planning and implementation phases. It identifies the following:

◆ The Symmetrix system in use (full serial #s: 000290100810 and 000190102000).

◆ The Enginuity code level in use under the heading MC.

◆ The SCF gatekeepers listed under the heading CCUU (for example, ACD0, A8D0, and so on).

◆ The RDFGRPs in use, listed as the first two characters (digits) under the heading MHOP.

◆ The fact that the systems 000290100810 and 000190102000 are linked:

F SSCF57P,DEV,DIS,TOPOLOGY

SCF0341I DEV,DIS,TOPOLOGY
SCF0358I LCL SERIAL#   MC       CCUU MHOP ------ REMOTE
SCF0358I 000290100810 5772.079 ACD0 00FF 000190102000 5772.079
SCF0358I ----------------
SCF0358I                          0AFF 000190102000 5772.079
SCF0358I ----------------
SCF0358I                          13FF 000190102000 5772.079
SCF0358I ----------------
SCF0358I                          18FF 000190102000 5772.079
SCF0358I ----------------
SCF0358I                          1DFF 000190102000 5772.079
SCF0358I ----------------
SCF0358I                          24FF 000190102000 5772.079
SCF0358I --------------------------------
SCF0356I DEVICE DISPLAY TOPOLOGY COMMAND COMPLETED.

Creation of the RDFGRPS for the SRDF/A sessions

The following are the commands used to create the RDFGRPS that will be used throughout the following examples:

Create RDFGRP 28:

#SC RDFGRP,ACDF,28,ADD(NO-AUTO-RCVRY),LDIR(38,39),RDIR(38,39) - LABEL(RDFG1),RSER(000190102000),RGRP(28)

EMCMN00I SRDF-HC : (11) #SC RDFGRP,ACDF,28,ADD(NO-AUTO-RCVRY),LDIR(38,39),RDIR(38,39),LABEL(RDFG1),RSER(000190102000),RGRP(28)
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

Create RDFGRP 29:

#SC RDFGRP,ACDF,29,ADD(NO-AUTO-RCVRY),LDIR(38,39),RDIR(38,39), - LABEL(RDFG2),RSER(000190102000),RGRP(29)

EMCMN00I SRDF-HC : (20) #SC RDFGRP,ACDF,29,ADD(NO-AUTO-RCVRY),LDIR(38,39),RDIR(38,39),LABEL(RDFG2),RSER(000190102000),RGRP(29)
EMCGM07I COMMAND COMPLETED (CUU:ACDF)
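Because these commands are long and easy to mistype at a console, it can help to assemble them programmatically. The following is a hypothetical helper that builds the command string from its parts, using only the parameter set shown in the examples above; it ignores console line-length continuation and is not part of the SRDF Host Component:

```python
def build_create_rdfgrp(gatekeeper, grp, ldirs, rdirs, label, rser, rgrp):
    """Assemble a #SC RDFGRP ADD command string (illustrative only;
    parameter set taken from the examples in this section)."""
    return ("#SC RDFGRP,{gk},{grp},ADD(NO-AUTO-RCVRY),"
            "LDIR({ld}),RDIR({rd}),LABEL({lbl}),"
            "RSER({rser}),RGRP({rgrp})").format(
        gk=gatekeeper, grp=grp,
        ld=",".join(ldirs), rd=",".join(rdirs),
        lbl=label, rser=rser, rgrp=rgrp)
```

For RDFGRP 29 this reproduces the corrected command text, which keeps the label, serial number, and group number consistent between the two definitions.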

Query the newly defined RDFGRPs 28 and 29 locally and remotely:

#SQ RDFGRP,LCL(ACDF,28),RA(28)

EMCMN00I SRDF-HC : (67) #SQ RDFGRP,LCL(ACDF,28),RA(28)
EMCQR00I SRDF-HC DISPLAY FOR (67) #SQ RDFGRP,LCL(ACDF,28),RA(28) 309
MY SERIAL #  MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
28     Y   F  28     000190102000 5772-83      C(R1>R2)
RDFG1      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO  (MSC1 )
MY DIR# OS RA# ST -----MY WWN----- ----IN COUNT---- ---OUT COUNT----
------- ------ -- ---------------- ---------------- ----------------
39      38     02 5006048C52A59298 00000000000B1720 000000000907F268
        39     02                  00000000001781E0 00000000092E6DC0
                                   0000000000229900 0000000012366028
3A      39     02 5006048C52A59299 00000000000CC548 00000000092E6290
        38     02                  0000000000219D40 000000000908C468
                                   00000000002E6288 00000000123726F8
END OF DISPLAY

#SQ RDFGRP,RMT(ACDF,28),RA(28)

EMCMN00I SRDF-HC : (68) #SQ RDFGRP,RMT(ACDF,28),RA(28)
EMCQR00I SRDF-HC DISPLAY FOR (68) #SQ RDFGRP,RMT(ACDF,28),RA(28) 314
MY SERIAL #  MY MICROCODE
------------ ------------
000190102000 5772-83
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
28     Y   F  28     000290100810 5772-83      C(R1>R2)
RDFG1      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO  (MSC1 )
MY DIR# OS RA# ST -----MY WWN----- ----IN COUNT---- ---OUT COUNT----
------- ------ -- ---------------- ---------------- ----------------
39      38     02 5006048AD52E7C18 00000000090883E0 0000000000088660
        39     02                  00000000090A29D8 00000000001C7AE0
                                   000000001212ADB8 0000000000250140
3A      38     02 5006048AD52E7C19 0000000009307970 0000000000145F78

        39     02                  00000000092F08D0 000000000009C878
                                   00000000125F8240 00000000001E27F0
END OF DISPLAY

#SQ RDFGRP,RMT(ACDF,29),RA(29)

EMCMN00I SRDF-HC : (69) #SQ RDFGRP,RMT(ACDF,29),RA(29)
EMCQR00I SRDF-HC DISPLAY FOR (69) #SQ RDFGRP,RMT(ACDF,29),RA(29) 318
MY SERIAL #  MY MICROCODE
------------ ------------
000190102000 5772-83
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
29     Y   F  29     000290100810 5772-83      C(R1>R2)
RDFG2      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO  (MSC1 )
MY DIR# OS RA# ST -----MY WWN----- ----IN COUNT---- ---OUT COUNT----
------- ------ -- ---------------- ---------------- ----------------
39      38     02 5006048AD52E7C18 000000000008A740 000000000008E968
        39     02                  0000000000140CA0 00000000001AAF18
                                   00000000001CB3E0 0000000000239880
3A      38     02 5006048AD52E7C19 0000000000206700 0000000000123A30
        39     02                  00000000000E20F0 000000000013BD48
                                   00000000002E87F0 000000000025F778
END OF DISPLAY

#SQ RDFGRP,LCL(ACDF,29),RA(29)

EMCMN00I SRDF-HC : (70) #SQ RDFGRP,LCL(ACDF,29),RA(29)
EMCQR00I SRDF-HC DISPLAY FOR (70) #SQ RDFGRP,LCL(ACDF,29),RA(29) 325
MY SERIAL #  MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
29     Y   F  29     000190102000 5772-83      C(R1>R2)
RDFG2      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO  (MSC1 )
MY DIR# OS RA# ST -----MY WWN----- ----IN COUNT---- ---OUT COUNT----
------- ------ -- ---------------- ---------------- ----------------
39      39     02 5006048C52A59298 0000000000147118 0000000000202480
        38     02                  000000000009FEB8 0000000000085848
                                   00000000001E6FD0 0000000000287CC8

3A      38     02 5006048C52A59299 00000000001DEDA0 000000000013B558
        39     02                  000000000016BEC0 00000000000DACB0
                                   000000000034AC60 0000000000216208
END OF DISPLAY

The following is a display of all the defined SRDF Groups. The groups defined in the previous example are displayed at the bottom of this list and are labeled RDFG1 and RDFG2, respectively:

#SQ RDFGRP,ACDF

EMCCMDXT(12/10/00-15.11) ACTIVATED
EMCMN00I SRDF-HC : (24) #SQ RDFGRP,ACDF
EMCQR00I SRDF-HC DISPLAY FOR (24) #SQ RDFGRP,ACDF 270
MY SERIAL #  MY MICROCODE
------------ ------------
000290100810 5772-79
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
00     Y   F  00     000190102000 5772-79      G(R1>R2)
GRP000     STATIC  AUTO-LINKS-RECOVERY    LINKS-DOMINO:NO
0A     Y   F  0A     000190102000 5772-79      G(R1>R2)
TEST       DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
13     Y   F  13     000190102000 5772-79      G(R1>R2)
DEMO_RDFG  DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
18     Y   F  18     000190102000 5772-79      G(R1>R2)
TESTRDFG   DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
1D     Y   F  1D     000190102000 5772-79      G(R1>R2)
DSE_RDFG   DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
24     Y   F  24     000190102000 5772-79      G(R1>R2)
MPRDF      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
28     Y   F  28     000190102000 5772-79      G(R1>R2)
RDFG1      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
29     Y   F  29     000190102000 5772-79      G(R1>R2)
RDFG2      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
END OF DISPLAY

The following is a display of the topology. RAs 28 and 29 are now in use:

F SSCF57P,DEV,DIS,TOPOLOGY

SCF0341I DEV,DIS,TOPOLOGY
SCF0358I LCL SERIAL#   MC       CCUU MHOP ------ REMOTE
SCF0358I 000290100810 5772.079 ACD0 00FF 000190102000 5772.079
SCF0358I ----------------
SCF0358I                          0AFF 000190102000 5772.079
SCF0358I ----------------
SCF0358I                          13FF 000190102000 5772.079
SCF0358I ----------------
SCF0358I                          18FF 000190102000 5772.079
SCF0358I ----------------
SCF0358I                          1DFF 000190102000 5772.079
SCF0358I ----------------
SCF0358I                          24FF 000190102000 5772.079
SCF0358I ----------------
SCF0358I                          28FF 000190102000 5772.079
SCF0358I ----------------
SCF0358I                          29FF 000190102000 5772.079
SCF0358I --------------------------------
SCF0356I DEVICE DISPLAY TOPOLOGY COMMAND COMPLETED.

Define Group Name Services (GNS) for ease of use in commands

The following is the JCL used for a Group Name Services (GNS) definition; it defines all primary devices on subsystem 000290100810 as being associated with SRDF groups 28 and 29:

//JS10     EXEC PGM=EMCGROUP
//STEPLIB  DD DISP=SHR,DSN=ICO.PROD.LINKLIB
//SYSPRINT DD SYSOUT=A,LRECL=133,RECFM=FBA,BLKSIZE=3990
//REPORT   DD SYSOUT=A,LRECL=133,RECFM=FBA,BLKSIZE=3990
//SCF$V570 DD DUMMY
//SYSIN    DD *
DEFINE GROUP GNS1 -
  INCLUDE RDF GROUP=000290100810,(LCL=28) -
  INCLUDE RDF GROUP=000290100810,(LCL=29)
DEFINE GROUP GNSR2 -
  INCLUDE RDF GROUP=000190102000,(LCL=28) -
  INCLUDE RDF GROUP=000190102000,(LCL=29)
/*

EMCP001I DEFINE GROUP GNS1 -
EMCP001I INCLUDE RDF GROUP=000290100810,(LCL=28) -

EMCP001I INCLUDE RDF GROUP=000290100810,(LCL=29)
EGRP010I Parse complete for statement # 1
EMCP001I DEFINE GROUP GNSR2 -
EMCP001I INCLUDE RDF GROUP=000190102000,(LCL=28) -
EMCP001I INCLUDE RDF GROUP=000190102000,(LCL=29)
EGRP010I Parse complete for statement # 2
EGRP010I PARSE complete. 2 statements parsed.
*****************************************************************
EGRP020I Begin Executing Statement # 1
EGRP090I DEFINE OF GROUP 'GNS1' COMPLETED WITH RETURN CODE: 0-0 REASON 0 = OK
EGRP021I Processing Ended For Statement # 1
EGRP020I Begin Executing Statement # 2
EGRP090I DEFINE OF GROUP 'GNSR2' COMPLETED WITH RETURN CODE: 0-0 REASON 0 = OK
EGRP021I Processing Ended For Statement # 2

Display and validate the devices (148 to 14F) before using them in the createpair operation:

#SQ VOL,AA08,8

EMCQV00I SRDF-HC DISPLAY FOR (82) #SQ VOL,AA08,8 855
DV_ADDR| _SYM_      |      |TOTAL|SYS |DCB|CNTLUNIT|  |  R1  |  R2  |SY
SYS  CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS  |MR|INVTRK|INVTRK| %
AA08 08 0148        EMAA08  3339 ONPV   0 TNR-SY   PL
AA09 09 0149        EMAA09  3339 ONPV   0 TNR-SY   PL
AA0A 0A 014A        EMAA0A  3339 ONPV   0 TNR-SY   PL
AA0B 0B 014B        EMAA0B  3339 ONPV   0 TNR-SY   PL
AA0C 0C 014C        EMAA0C  3339 ONPV   0 TNR-SY   PL
AA0D 0D 014D        EMAA0D  3339 ONPV   0 TNR-SY   PL
AA0E 0E 014E        EMAA0E  3339 ONPV   0 TNR-SY   PL
AA0F 0F 014F        EMAA0F  3339 ONPV   0 TNR-SY   PL
END OF DISPLAY

Note: The status of the devices is shown as TNR-SY (Target Not Ready, Synchronous).

Now that the device status has been ascertained, the createpair can be executed to create the primary to secondary pairing for the RDFGroups as follows:

#SC VOL,LCL(ACDF,29),CREATEPAIR(ADCOPY-DISK),148-14F,148

EMCGM07I COMMAND COMPLETED (CUU:A900)

The volumes can be queried to verify that the status has changed to Adaptive Copy (ADCOPY) as follows:

#SQ VOL,AA08,8

EMCMN00I SRDF-HC : (34) #SQ VOL,AA08,8
EMCQV00I SRDF-HC DISPLAY FOR (34) #SQ VOL,AA08,8 049
DV_ADDR| _SYM_      |      |TOTAL|SYS |DCB|CNTLUNIT|  |  R1  |  R2  |SY
SYS  CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS  |MR|INVTRK|INVTRK| %
AA08 00 0140 0140 29 EMAA08 3339 ONPV   0 TNR-AD   L1      0  1,131 93
AA09 01 0141 0141 29 EMAA09 3339 ONPV   0 TNR-AD   L1      0    714 95
AA0A 02 0142 0142 29 EMAA0A 3339 ONPV   0 TNR-AD   L1      0  1,407 91
AA0B 03 0143 0143 29 EMAA0B 3339 ONPV   0 TNR-AD   L1      0    810 95

Note: The status has changed from ‘-SY’ to ‘-AD’ displaying that the volume status has changed from synchronous to Adaptive Copy.

SRDF replication processing can be resumed and will cause a full synchronization between the R1s and R2s:

#SC VOL,LCL(ACDF,28),RDF-RSUM,140-147

EMCMN00I SRDF-HC : (64) #SC VOL,LCL(ACDF,28),RDF-RSUM,140-147,140-147
EMCCV1DI PROCESSING RANGE COMMAND FOR DEVICE 0140 FOR 8 DEVICES (CUU:ACDF)

#SC VOL,LCL(ACDF,29),RDF-RSUM,148-14F

EMCMN00I SRDF-HC : (64) #SC VOL,LCL(ACDF,29),RDF-RSUM,148-14F
EMCCV1DI PROCESSING RANGE COMMAND FOR DEVICE 0148 FOR 8 DEVICES (CUU:ACDF)

The devices can be queried (starting with AA00 for eight devices) to verify that replication has started:

Note: The last column of the display indicates percentage replicated.

#SQ VOL,AA00,8

EMCMN00I SRDF-HC : (34) #SQ VOL,AA00,8
EMCQV00I SRDF-HC DISPLAY FOR (34) #SQ VOL,AA00,8 049
DV_ADDR| _SYM_      |      |TOTAL|SYS |DCB|CNTLUNIT|  |  R1  |  R2  |SY
SYS  CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS  |MR|INVTRK|INVTRK| %
AA00 00 0140 0140 28 EMAA00 3339 ONPV   0 R/W-AD   L1      0    131 93
AA01 01 0141 0141 28 EMAA01 3339 ONPV   0 R/W-AD   L1      0    714 95
AA02 02 0142 0142 28 EMAA02 3339 ONPV   0 R/W-AD   L1      0    407 91
AA03 03 0143 0143 28 EMAA03 3339 ONPV   0 R/W-AD   L1      0    810 95

Note: The status has changed from not ready on the link to R/W-AD (Read, Write Adaptive Copy).

Synchronization percentage can be queried by issuing the following command:

#SQ VOL,SCFG(GNS1),INV_TRKS

EMCMN00I SRDF-HC : (81) #SQ VOL,SCFG(GNS1),INV_TRKS
EMCQV00I SRDF-HC DISPLAY FOR (81) #SQ VOL,SCFG(GNS1),INV_TRKS 622
DV_ADDR| _SYM_      |      |TOTAL|SYS |DCB|CNTLUNIT|  |  R1  |  R2  |SY
SYS  CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS  |MR|INVTRK|INVTRK| %
AA01 01 0141 0141 28 OFLINE 3339 OFFL   0 TNR-SY   L1      0  9,860 80
AA02 02 0142 0142 28 OFLINE 3339 OFFL   0 TNR-SY   L1      0  9,827 80
AA03 03 0143 0143 28 OFLINE 3339 OFFL   0 TNR-SY   L1      0  9,842 80
AA04 04 0144 0144 28 OFLINE 3339 OFFL   0 TNR-SY   L1      0  9,797 80
AA05 05 0145 0145 28 OFLINE 3339 OFFL   0 TNR-SY   L1      0  9,809 80
AA06 06 0146 0146 28 OFLINE 3339 OFFL   0 TNR-SY   L1      0  9,818 80
AA07 07 0147 0147 28 OFLINE 3339 OFFL   0 TNR-SY   L1      0  9,798 80
END OF DISPLAY
EMCQV00I SRDF-HC DISPLAY FOR (81) #SQ VOL,SCFG(GNS1),INV_TRKS 623
DV_ADDR| _SYM_      |      |TOTAL|SYS |DCB|CNTLUNIT|  |  R1  |  R2  |SY
SYS  CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS  |MR|INVTRK|INVTRK| %
AA08 08 0148 0148 29 OFLINE 3339 OFFL   0 RNR-SY   L1      0 10,502 79
AA09 09 0149 0149 29 OFLINE 3339 OFFL   0 RNR-SY   L1      0 10,502 79
AA0A 0A 014A 014A 29 OFLINE 3339 OFFL   0 RNR-SY   L1      0 10,502 79
AA0B 0B 014B 014B 29 OFLINE 3339 OFFL   0 RNR-SY   L1      0 10,502 79
AA0C 0C 014C 014C 29 OFLINE 3339 OFFL   0 RNR-SY   L1      0 10,502 79
AA0D 0D 014D 014D 29 OFLINE 3339 OFFL   0 RNR-SY   L1      0 10,502 79
AA0E 0E 014E 014E 29 OFLINE 3339 OFFL   0 RNR-SY   L1      0 10,502 79
AA0F 0F 014F 014F 29 OFLINE 3339 OFFL   0 RNR-SY   L1      0 10,502 79
END OF DISPLAY
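The R2 invalid-track counts shown above can be turned into a rough resynchronization-time estimate. The following sketch assumes each 3390 track holds 56,664 bytes and deliberately ignores protocol overhead, compression, and competing workload, so treat the result as a lower bound only:

```python
TRACK_BYTES = 56_664  # capacity of one 3390 track

def resync_time_seconds(invalid_tracks, link_mb_per_sec):
    """Rough lower bound on the time to copy owed tracks across the
    SRDF links (illustrative estimate, not an EMC-provided formula)."""
    total_mb = invalid_tracks * TRACK_BYTES / 1_000_000
    return total_mb / link_mb_per_sec
```

For the group 29 devices above (8 devices at 10,502 invalid tracks each, or 84,016 tracks), a 100 MB/s link would need at least about 48 seconds of pure transfer time.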

Transition into SRDF/A mode from adaptive copy disk mode is immediate. Tracks owed to the secondary Symmetrix as a result of adaptive copy disk skew are scheduled as resynchronization operations. These are copy I/Os scheduled by the disk adapter to be serviced by SRDF/A. Each cycle switch (new capture delta set) limits the copy I/Os to 30,000 tracks across all RDF groups to avoid exhausting the cache in the primary Symmetrix. Host I/Os continue to be serviced in the current SRDF/A cycle (capture delta set). The time needed to send the tracks owed in asynchronous mode depends on the number of outstanding tracks owed prior to switching to asynchronous mode and on the available bandwidth. For example, 90,000 tracks owed takes a minimum of three SRDF/A cycle switches to transmit the data. Another two cycle switches are required to ensure that the data is in the apply delta set, or the N-2 copy of data. SRDF/A produces a consistent state on the secondary Symmetrix and a dependent-write consistent copy of data after all resync operations are complete and the two additional cycle switches have occurred.
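The cycle arithmetic described above can be sketched directly. This is a minimal model based only on the figures quoted in the text (30,000 copy tracks admitted per cycle switch, plus two further switches for the data to reach the apply delta set); it is an illustration, not SRDF/A internals:

```python
import math

COPY_TRACKS_PER_CYCLE = 30_000  # copy I/Os admitted per cycle switch (per the text)

def cycles_to_consistency(tracks_owed, min_cycle_time=30):
    """Minimum cycle switches (and seconds at the minimum cycle time)
    until the secondary holds a dependent-write consistent image:
    drain cycles plus two more so the data reaches the apply
    (N-2) delta set."""
    drain = math.ceil(tracks_owed / COPY_TRACKS_PER_CYCLE)
    total = drain + 2
    return total, total * min_cycle_time
```

With 90,000 tracks owed this gives the five cycle switches from the example above, or at least 150 seconds at the default 30-second minimum cycle time.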

Starting SRDF/A

The following procedure should be utilized when starting SRDF/A:

1. Start SRDF/A.

2. Activate SRDF pairs into SRDF/A mode.

3. Query all SRDF/A session volumes to make sure they are in the expected state as follows:

#SQ VOL,SCFG(GNS1)

EMCMN00I SRDF-HC : (16) #SQ VOL,SCFG(GNS1)
EMCQV00I SRDF-HC DISPLAY FOR (16) #SQ VOL,SCFG(GNS1) 988
DV_ADDR| _SYM_      |      |TOTAL|SYS |DCB|CNTLUNIT|  |  R1  |  R2  |SY
SYS  CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS  |MR|INVTRK|INVTRK| %
AA00 00 0140 0140 28 EMAA00 3339 ONPV   0 R/W-AD   L1      0      0 **
AA01 01 0141 0141 28 EMAA01 3339 ONPV   0 R/W-AD   L1      0      0 **
AA02 02 0142 0142 28 EMAA02 3339 ONPV   0 R/W-AD   L1      0      0 **
AA03 03 0143 0143 28 EMAA03 3339 ONPV   0 R/W-AD   L1      0      0 **
AA04 04 0144 0144 28 EMAA04 3339 ONPV   0 R/W-AD   L1      0      0 **
AA05 05 0145 0145 28 EMAA05 3339 ONPV   0 R/W-AD   L1      0      0 **
AA06 06 0146 0146 28 EMAA06 3339 ONPV   0 R/W-AD   L1      0      0 **
AA07 07 0147 0147 28 EMAA07 3339 ONPV   0 R/W-AD   L1      0      0 **
END OF DISPLAY
EMCQV00I SRDF-HC DISPLAY FOR (16) #SQ VOL,SCFG(GNS1) 989
DV_ADDR| _SYM_      |      |TOTAL|SYS |DCB|CNTLUNIT|  |  R1  |  R2  |SY
SYS  CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS  |MR|INVTRK|INVTRK| %
AA08 08 0148 0148 29 EMAA08 3339 ONPV   0 R/W-AD   L1      0      0 **
AA09 09 0149 0149 29 EMAA09 3339 ONPV   0 R/W-AD   L1      0      0 **
AA0A 0A 014A 014A 29 EMAA0A 3339 ONPV   0 R/W-AD   L1      0      0 **
AA0B 0B 014B 014B 29 EMAA0B 3339 ONPV   0 R/W-AD   L1      0      0 **
AA0C 0C 014C 014C 29 EMAA0C 3339 ONPV   0 R/W-AD   L1      0      0 **
AA0D 0D 014D 014D 29 EMAA0D 3339 ONPV   0 R/W-AD   L1      0      0 **
AA0E 0E 014E 014E 29 EMAA0E 3339 ONPV   0 R/W-AD   L1      0      0 **
AA0F 0F 014F 014F 29 EMAA0F 3339 ONPV   0 R/W-AD   L1      0      0 **

Activate an SRDF/A session for SRDF groups 28 and 29 as follows:

#SC SRDFA,LCL(ACDF,28),ACT

EMCMN00I SRDF-HC : (17) #SC SRDFA,LCL(ACDF,28),ACT
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

#SC SRDFA,LCL(ACDF,29),ACT

EMCMN00I SRDF-HC : (18) #SC SRDFA,LCL(ACDF,29),ACT
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

Display the SRDF/A sessions on both the primary and secondary subsystems and check for SRDF/A session in consistent state:

#SQ SRDFA,LCL(ACDF,29)

EMCMN00I SRDF-HC : (19) #SQ SRDFA,LCL(ACDF,29)
EMCQR00I SRDF-HC DISPLAY FOR (19) #SQ SRDFA,LCL(ACDF,29) 036
MY SERIAL #  MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
29     Y   F  29     000190102000 5772-79      G(R1>R2) SRDFA ACTIVE
RDFG2      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
----------------------------------------------------------------------
PRIMARY SIDE:
CYCLE NUMBER 3                    MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y )        TOLERANCE ( N )
CAPTURE CYCLE SIZE 0              TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 60             AVERAGE CYCLE SIZE 0
TIME SINCE LAST CYCLE SWITCH 28   DURATION OF LAST CYCLE 30
MAX THROTTLE TIME 0               MAX CACHE PERCENTAGE 94
HA WRITES 0                       RPTD HA WRITES 0
HA DUP. SLOTS 0                   SECONDARY DELAY 58
LAST CYCLE SIZE 0                 DROP PRIORITY 33
CLEANUP RUNNING ( N )             MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( Y )         SRDFA DSE ACTIVE ( N )
MSC ACTIVE ( N )
----------------------------------------------------------------------
END OF DISPLAY

Display the SRDF/A session on the secondary subsystem:

#SQ SRDFA,RMT(ACDF,29)

EMCMN00I SRDF-HC : (20) #SQ SRDFA,RMT(ACDF,29)
EMCQR00I SRDF-HC DISPLAY FOR (20) #SQ SRDFA,RMT(ACDF,29) 040
MY SERIAL #  MY MICROCODE
------------ ------------
000190102000 5772-79
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
29     Y   F  29     000290100810 5772-83      G(R1>R2) SRDFA ACTIVE

RDFG2      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
----------------------------------------------------------------------
SECONDARY SIDE:
CYCLE NUMBER 5                    CYCLE SUSPENDED ( N )
RESTORE DONE ( Y )                RECEIVE CYCLE SIZE 0
APPLY CYCLE SIZE 0                AVERAGE CYCLE TIME 37
AVERAGE CYCLE SIZE 0              TIME SINCE LAST CYCLE SWITCH 10
DURATION OF LAST CYCLE 29         MAX THROTTLE TIME 0
MAX CACHE PERCENTAGE 94           TOTAL RESTORES 0
TOTAL MERGES 0                    SECONDARY DELAY 39
DROP PRIORITY 33                  CLEANUP RUNNING ( N )
HOST INTERVENTION REQUIRED ( N )  SRDFA TRANSMIT IDLE ( Y )
SRDFA DSE ACTIVE ( N )            MSC ACTIVE ( N )
----------------------------------------------------------------------
END OF DISPLAY

Activate Transmit Idle

Key facts regarding the mainframe Transmit Idle interface are as follows:

◆ Transmit Idle can be set on or off using the SC SRDFA command.

◆ To determine if Transmit Idle feature is enabled, issue the SQ SRDFA command.

◆ The SQ SRDFA command indicates when you are in the Transmit Idle status (that is, SRDF/A is active, SRDF/A devices are ready on the link; however, the link is down).

◆ MSC prevents the startup of an MSC group if one or more SRDF/A Groups are in Transmit Idle status.

◆ MSC displays Transmit Idle status if an SRDF/A Group experiences a temporary link loss of all links.

◆ MSC appears as normal when exiting Transmit Idle status after one of the links has resumed.

◆ The normal DROP command does not work when the status is Transmit Idle.

◆ A new SC SRDFA DROP_SIDE command is added to SRDF HC.
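The behavior described in the bullets above can be summarized as a small state decision: a session with at least one surviving link runs normally, while a session that loses all links either idles (Transmit Idle on) or drops (Transmit Idle off). The following is an illustrative sketch of that decision, with made-up state names, not an SRDF/A internal state machine:

```python
def session_state_on_link_loss(links_up, transmit_idle_enabled):
    """Sketch of the behavior described above: with Transmit Idle on,
    losing all links leaves the SRDF/A session active (idling) instead
    of dropping it; any surviving link keeps normal operation."""
    if links_up > 0:
        return "ACTIVE"
    return "ACTIVE-TRANSMIT-IDLE" if transmit_idle_enabled else "DROPPED"
```

This also explains why a separate DROP_SIDE command is needed: while the session sits in the idle state, the normal DROP path is unavailable.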

You can perform the following operations with the mainframe interface:

Turn Transmit Idle on as follows:

#SC SRDFA,LCL(ACDF,28),TRANSMIT_IDLE,ON

EMCMN00I SRDF-HC : (34) #SC SRDFA,LCL(ACDF,28),TRANSMIT_IDLE,ON
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

Query transmit_idle on as follows:

#SQ SRDFA,LCL(ACDF,28)

EMCMN00I SRDF-HC : (35) #SQ SRDFA,LCL(ACDF,28)
EMCQR00I SRDF-HC DISPLAY FOR (35) #SQ SRDFA,LCL(ACDF,28) 689
MY SERIAL #  MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
28     Y   F  28     000190102000 5772-79      G(R1>R2) SRDFA ACTIVE
RDFG1      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
----------------------------------------------------------------------
PRIMARY SIDE:
CYCLE NUMBER 11,388               MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y )        TOLERANCE ( N )
CAPTURE CYCLE SIZE 3,696          TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 31             AVERAGE CYCLE SIZE 17,514
TIME SINCE LAST CYCLE SWITCH 6    DURATION OF LAST CYCLE 30
MAX THROTTLE TIME 0               MAX CACHE PERCENTAGE 94
HA WRITES 232,843,459             RPTD HA WRITES 110,941,015
HA DUP. SLOTS 16,635,054          SECONDARY DELAY 36
LAST CYCLE SIZE 23,027            DROP PRIORITY 33
CLEANUP RUNNING ( N )             MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( Y )         SRDFA DSE ACTIVE ( N )
MSC ACTIVE ( N )

Set transmit_idle on for the secondary subsystem as follows:

#SC SRDFA,RMT(ACDF,28),TRANSMIT_IDLE,ON

EMCMN00I SRDF-HC : (36) #SC SRDFA,RMT(ACDF,28),TRANSMIT_IDLE,ON
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

Display the secondary SRDF/A session to validate that Transmit Idle is on:

#SQ SRDFA,RMT(ACDF,28)

EMCMN00I SRDF-HC : (37) #SQ SRDFA,RMT(ACDF,28)
EMCQR00I SRDF-HC DISPLAY FOR (37) #SQ SRDFA,RMT(ACDF,28) 796
MY SERIAL #  MY MICROCODE
------------ ------------
000190102000 5772-79
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
28     Y   F  28     000290100810 5772-83      G(R1>R2) SRDFA ACTIVE
RDFG1      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
----------------------------------------------------------------------
SECONDARY SIDE:
CYCLE NUMBER 11,394               CYCLE SUSPENDED ( N )
RESTORE DONE ( N )                RECEIVE CYCLE SIZE 12,735
APPLY CYCLE SIZE 0                AVERAGE CYCLE TIME 30
AVERAGE CYCLE SIZE 17,721         TIME SINCE LAST CYCLE SWITCH 4
DURATION OF LAST CYCLE 30         MAX THROTTLE TIME 0
MAX CACHE PERCENTAGE 94           TOTAL RESTORES 122,568,553
TOTAL MERGES 22,708,479           SECONDARY DELAY 34
DROP PRIORITY 33                  CLEANUP RUNNING ( N )
HOST INTERVENTION REQUIRED ( N )  SRDFA TRANSMIT IDLE ( Y )
SRDFA DSE ACTIVE ( N )            MSC ACTIVE ( N )
----------------------------------------------------------------------
END OF DISPLAY

Table 10 lists the possible values for the Transmit Idle indicator.

Table 10 Transmit Idle indicator values

Value Description

BLANK Not SRDF/A at this time

SRDFA ACTIVE SRDF/A is active on the RDFGRP at this time

SRDFA INACT SRDF/A is in a transitional state. Once cleanup is done, the group will no longer be SRDF/A active

SRDFA A MSC An active SRDF/A RDFGRP running in MSC

SRDFA I MSC An inactive SRDF/A RDFGRP that was running in MSC

SRDFA A STAR An active SRDF/A RDFGRP that is running in both MSC and STAR

SRDFA I STAR An inactive SRDF/A RDFGRP that was running in MSC and STAR

STAR An RDFGRP defined to a STAR definition that is not running SRDF/A

DSE pool definition

In the following examples, DSEL1 and DSEL2 will be the designations used for the primary DSE pools, and DSER1 and DSER2 for the secondary.

The following JCL is used to invoke the POOL manipulation utility:

//QCOPYRUN EXEC PGM=ESFGPMBT,REGION=4M

//STEPLIB DD DISP=SHR,DSN=ICO.PROD.LINKLIB

//GPMPRINT DD SYSOUT=*

//SCF$V570 DD DUMMY

//GPMINPUT DD *

In the following, the first command displays the default pool, the second creates the pool, and the third adds devices to the newly created pool. As an ongoing best practice the pool is displayed again:

CONFIGPOOL DISPLAY(TARGET(UNIT(ACDF)) TYPE(SNAPPOOL) -
  POOL(DEFAULT_POOL))
CONFIGPOOL CREATE (TARGET(UNIT(ACDF)) -
  POOL('DSEL1') -
  TYPE(DSEPOOL))
CONFIGPOOL ADD (TARGET(UNIT(ACDF)) -
  POOL('DSEL1') -
  TYPE(DSEPOOL) -
  DEV(0970-097F) -
  MEMBERSTATE(ENABLE))
CONFIGPOOL DISPLAY(TARGET(UNIT(ACDF)) TYPE(DSEPOOL) -
  POOL('DSEL1'))

/*

The following is the output from the display of the default pool. The devices that will be added to the local DSE pool (DSEL1) show as inactive and are available for use:

EMCU005I PROCESSING COMMAND:CONFIGPOOL DISPLAY(TARGET(UNIT(ACDF)) -
TYPE(SNAPPOOL) POOL(DEFAULT_POOL))

EMCU013I LOGPOOL DEVICE INFORMATION FOR LOGPOOL - DEFAULT_POOL
EMCU014I -DEVICE- -STATUS- TYPE --USED-- --FREE-- -DRAIN?-
EMCU015I    0791   ACTIVE   FBA        0   138105   NO
EMCU015I    0792   ACTIVE   FBA        0   138105   NO
EMCU015I    0970  INACTIVE  FBA        0   138105   NO
EMCU015I    0971  INACTIVE  FBA        0   138105   NO
EMCU015I    0972  INACTIVE  FBA        0   138105   NO

EMCU015I    0973  INACTIVE  FBA        0   138105   NO
EMCU015I    0974  INACTIVE  FBA        0   138105   NO
EMCU015I    0975  INACTIVE  FBA        0   138105   NO
EMCU015I    0976  INACTIVE  FBA        0   138105   NO
EMCU015I    0977  INACTIVE  FBA        0   138105   NO
EMCU015I    0978  INACTIVE  FBA        0   138105   NO
EMCU015I    0979  INACTIVE  FBA        0   138105   NO
EMCU015I    097A  INACTIVE  FBA        0   138105   NO
EMCU015I    097B  INACTIVE  FBA        0   138105   NO
EMCU015I    097C  INACTIVE  FBA        0   138105   NO
EMCU015I    097D  INACTIVE  FBA        0   138105   NO
EMCU015I    097E  INACTIVE  FBA        0   138105   NO
EMCU015I    097F  INACTIVE  FBA        0   138105   NO
EMCU006I COMMAND PROCESSED SUCCESSFULLY

EMCU005I PROCESSING COMMAND:CONFIGPOOL CREATE (TARGET(UNIT(ACDF)) -
POOL('DSEL1') - TYPE(DSEPOOL))
EMCU006I COMMAND PROCESSED SUCCESSFULLY
EMCU005I PROCESSING COMMAND:CONFIGPOOL ADD (TARGET(UNIT(ACDF)) -
POOL('DSEL1') - TYPE(DSEPOOL) - DEV(0970-097F) - MEMBERSTATE(ENABLE))
EMCU006I COMMAND PROCESSED SUCCESSFULLY
EMCU005I PROCESSING COMMAND:CONFIGPOOL DISPLAY(TARGET(UNIT(ACDF)) -
TYPE(DSEPOOL) POOL('DSEL1'))
EMCU013I LOGPOOL DEVICE INFORMATION FOR LOGPOOL - DSEL1
EMCU014I -DEVICE- -STATUS- TYPE --USED-- --FREE-- -DRAIN?-
EMCU015I    0970   ACTIVE  3390        0    50145   NO
EMCU015I    0971   ACTIVE  3390        0    50145   NO
EMCU015I    0972   ACTIVE  3390        0    50145   NO
EMCU015I    0973   ACTIVE  3390        0    50145   NO
EMCU015I    0974   ACTIVE  3390        0    50145   NO
EMCU015I    0975   ACTIVE  3390        0    50145   NO
EMCU015I    0976   ACTIVE  3390        0    50145   NO
EMCU015I    0977   ACTIVE  3390        0    50145   NO
EMCU015I    0978   ACTIVE  3390        0    50145   NO
EMCU015I    0979   ACTIVE  3390        0    50145   NO
EMCU015I    097A   ACTIVE  3390        0    50145   NO
EMCU015I    097B   ACTIVE  3390        0    50145   NO
EMCU015I    097C   ACTIVE  3390        0    50145   NO
EMCU015I    097D   ACTIVE  3390        0    50145   NO
EMCU015I    097E   ACTIVE  3390        0    50145   NO
EMCU015I    097F   ACTIVE  3390        0    50145   NO
EMCU006I COMMAND PROCESSED SUCCESSFULLY
EMCU008I END OF COMMANDS FILE REACHED

The devices (0970-097F) now show as active in the newly created DSE pool DSEL1.
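The CREATE/ADD/DISPLAY sequence above can be modeled conceptually as follows. This is a minimal Python sketch of the pool-membership semantics only; the class and method names are invented for illustration and are not part of any EMC API.

```python
# Conceptual model of the CONFIGPOOL CREATE/ADD sequence shown above.
# Illustrative only -- the DsePool class is not an EMC interface.

class DsePool:
    def __init__(self, name):
        self.name = name
        self.members = {}          # device number -> member state

    def add(self, first, last, memberstate="ENABLE"):
        # DEV(0970-097F) adds a contiguous range of device numbers
        for dev in range(first, last + 1):
            self.members[f"{dev:04X}"] = (
                "ACTIVE" if memberstate == "ENABLE" else "INACTIVE"
            )

    def display(self):
        # analogous to CONFIGPOOL DISPLAY: list each member and its state
        return [(dev, state) for dev, state in sorted(self.members.items())]

pool = DsePool("DSEL1")                 # CONFIGPOOL CREATE POOL('DSEL1')
pool.add(0x0970, 0x097F)                # CONFIGPOOL ADD DEV(0970-097F)
print(len(pool.display()))              # 16 devices, all ACTIVE
```

With MEMBERSTATE(ENABLE), all sixteen devices in the range become active pool members, matching the DISPLAY output above.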

EMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS


Implementation of SRDF/A

The following is the command stream for the secondary or remote DSE pool (DSER1):

CONFIGPOOL DISPLAY(REMOTE(UNIT(ACDF) -
          RAGROUP(28) CONTROLLER(02000)) TYPE(DSEPOOL) -
          POOL(DEFAULT_POOL))
CONFIGPOOL CREATE ( -
          REMOTE(UNIT(ACDF) -
          RAGROUP(28) CONTROLLER(02000)) -
          POOL('DSER1') -
          TYPE(DSEPOOL))
CONFIGPOOL ADD ( -
          REMOTE(UNIT(ACDF) -
          RAGROUP(28) CONTROLLER(02000)) -
          POOL('DSER1') -
          TYPE(DSEPOOL) -
          DEV(0970-097F) -
          MEMBERSTATE(ENABLE))
CONFIGPOOL DISPLAY( -
          REMOTE(UNIT(ACDF) -
          RAGROUP(28) CONTROLLER(02000)) -
          TYPE(DSEPOOL) -
          POOL(DSER1))

/*

The following is the output from the creation of the secondary DSE pool:

EMCU005I PROCESSING COMMAND:CONFIGPOOL DISPLAY(REMOTE(UNIT(ACDF) -
          RAGROUP(28) CONTROLLER(02000)) TYPE(DSEPOOL) -
          POOL(DEFAULT_POOL))
EMCU013I LOGPOOL DEVICE INFORMATION FOR LOGPOOL - DEFAULT_POOL
EMCU014I -DEVICE- -STATUS- TYPE --USED-- --FREE-- -DRAIN?-
EMCU015I 00000790  ACTIVE   FBA        0   138105    NO
EMCU015I 00000791  ACTIVE   FBA        0   138105    NO
EMCU015I 00000792  ACTIVE   FBA        0   138105    NO
EMCU015I 00000793  ACTIVE   FBA        0   138105    NO
EMCU015I 00000794  ACTIVE   FBA        0   138105    NO
EMCU015I 00000795  ACTIVE   FBA        0   138105    NO
EMCU015I 00000796  ACTIVE   FBA        0   138105    NO
EMCU015I 00000797  ACTIVE   FBA        0   138105    NO
EMCU015I 000010C5 INACTIVE  FBA        0    64515    NO
EMCU006I COMMAND PROCESSED SUCCESSFULLY
EMCU005I PROCESSING COMMAND:CONFIGPOOL CREATE ( -
          REMOTE(UNIT(ACDF) -
          RAGROUP(28) CONTROLLER(02000)) -
          POOL('DSER1') -
          TYPE(DSEPOOL))
EMCU006I COMMAND PROCESSED SUCCESSFULLY
EMCU005I PROCESSING COMMAND:CONFIGPOOL ADD ( -
          REMOTE(UNIT(ACDF) -
          RAGROUP(28) CONTROLLER(02000)) -
          POOL('DSER1') -
          TYPE(DSEPOOL) -
          DEV(0970-097F) MEMBERSTATE(ENABLE))
EMCU006I COMMAND PROCESSED SUCCESSFULLY
EMCU005I PROCESSING COMMAND:CONFIGPOOL DISPLAY( REMOTE(UNIT(ACDF) -
          RAGROUP(28) CONTROLLER(02000)) -
          TYPE(DSEPOOL) -
          POOL(DSER1))
EMCU013I LOGPOOL DEVICE INFORMATION FOR LOGPOOL - DSER1
EMCU014I -DEVICE- -STATUS- TYPE --USED-- --FREE-- -DRAIN?-
EMCU015I 00000970  ACTIVE  3390        0    50145    NO
EMCU015I 00000971  ACTIVE  3390        0    50145    NO
EMCU015I 00000972  ACTIVE  3390        0    50145    NO
EMCU015I 00000973  ACTIVE  3390        0    50145    NO
EMCU015I 00000974  ACTIVE  3390        0    50145    NO
EMCU015I 00000975  ACTIVE  3390        0    50145    NO
EMCU015I 00000976  ACTIVE  3390        0    50145    NO
EMCU015I 00000977  ACTIVE  3390        0    50145    NO
EMCU015I 00000978  ACTIVE  3390        0    50145    NO
EMCU015I 00000979  ACTIVE  3390        0    50145    NO
EMCU015I 0000097A  ACTIVE  3390        0    50145    NO
EMCU015I 0000097B  ACTIVE  3390        0    50145    NO
EMCU015I 0000097C  ACTIVE  3390        0    50145    NO
EMCU015I 0000097D  ACTIVE  3390        0    50145    NO
EMCU015I 0000097E  ACTIVE  3390        0    50145    NO
EMCU015I 0000097F  ACTIVE  3390        0    50145    NO
EMCU006I COMMAND PROCESSED SUCCESSFULLY
EMCU008I END OF COMMANDS FILE REACHED

Note: For additional information on the creation of DSE pools, see Chapter 9 of the EMC ResourcePak Base for z/OS Version 5.7 Product Guide.

Activate DSE

The key facts regarding the mainframe Delta Set Extension (DSE) interface are as follows:

◆ DSE can be set on or off using the SC SRDFA command.

◆ To determine whether the DSE feature is enabled, use the SQ SRDFA command.


◆ The SQ SRDFA command indicates whether DSE is currently active.

◆ MSC prevents startup of an MSC group if one or more RDFGRPs are in Transmit Idle status.

◆ MSC displays Transmit Idle status if there is a temporary link loss.

◆ MSC appears as normal when exiting Transmit Idle status after one of the links resumes.

◆ The normal DROP command does not work when the status is Transmit Idle.

◆ A new SC SRDFA DROP_SIDE command has been added to SRDF Host Component.
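The Transmit Idle interaction described above can be illustrated with a small sketch: MSC refuses to start an MSC group if any of its RDF groups is in Transmit Idle status. This is a conceptual illustration only; the function and field names are invented for the example and do not correspond to actual product interfaces.

```python
# Conceptual sketch: MSC prevents startup of an MSC group if one or
# more of its RDF groups is in Transmit Idle status (for example,
# after a temporary link loss). Names are illustrative only.

def can_start_msc_group(rdf_groups):
    """Return (ok, offending_groups) for an MSC group startup attempt."""
    idle = [g["name"] for g in rdf_groups if g.get("transmit_idle")]
    return (len(idle) == 0, idle)

groups = [
    {"name": "28", "transmit_idle": False},
    {"name": "29", "transmit_idle": True},   # temporary link loss
]
ok, offenders = can_start_msc_group(groups)
print(ok, offenders)   # startup is refused while group 29 is Transmit Idle
```

Once the links resume and the group exits Transmit Idle, the same check succeeds and MSC startup proceeds normally.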

You can perform the Activate DSE operation with the Host Component interface.

Execute the following commands for the primary and secondary DSE SRDF/A sessions:

#SC SRDFA_DSE,LCL(ACDF,28),3390_POOL,P(DSEL1)

EMCMN00I SRDF-HC : (48) #SC SRDFA_DSE,LCL(ACDF,28),3390_POOL,P(DSEL1), CQNAME=(EMC24812235234)

#SC SRDFA_DSE,RMT(ACDF,28),3390_POOL,P(DSER1)

EMCMN00I SRDF-HC : (49) #SC SRDFA_DSE,RMT(ACDF,28),3390_POOL,P(DSER1), CQNAME=(EMC24812235234)
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

The following example illustrates how to set the threshold level for DSE activity:

#SC SRDFA_DSE,LCL(ACDF,28),THRESHOLD,40

EMCMN00I SRDF-HC : (50) #SC SRDFA_DSE,LCL(ACDF,28),THRESHOLD,40,CQNAME=(EMC24812235234)
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

#SC SRDFA_DSE,RMT(ACDF,28),THRESHOLD,40

EMCMN00I SRDF-HC : (51) #SC SRDFA_DSE,RMT(ACDF,28),THRESHOLD,40, CQNAME=(EMC24812235234)
EMCGM07I COMMAND COMPLETED (CUU:ACDF)
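The THRESHOLD value set above (40) is the cache-usage percentage at which DSE begins spilling delta set data to the pool. As a hedged sketch of that trigger condition (the real decision logic lives in Enginuity microcode; this only illustrates the comparison):

```python
# Simplified sketch of the DSE threshold trigger: once SRDF/A cache
# usage reaches the configured threshold percentage, delta set data
# starts spilling to the DSE pool instead of consuming more cache.
# Illustrative only -- not the actual microcode behavior.

def dse_should_spill(cache_used_pct, threshold_pct=40):
    """Return True when cache usage has reached the DSE threshold."""
    return cache_used_pct >= threshold_pct

for pct in (25, 40, 55):
    print(pct, dse_should_spill(pct))
```

With the threshold at 40, usage below 40 percent stays entirely in cache, while usage at or above it triggers the spill to the DSE pool devices defined earlier.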

#SQ SRDFA,LCL(ACDF,28)

EMCMN00I SRDF-HC : (52) #SQ SRDFA,LCL(ACDF,28),CQNAME=(EMC24812235234)
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

MY SERIAL #  MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
28     Y   F  28     000190102000 5772-79      G(R1>R2) SRDFA ACTIVE
RDFG1      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
----------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 11,419            MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y )                   TOLERANCE ( N )
CAPTURE CYCLE SIZE 16,122                    TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 31                        AVERAGE CYCLE SIZE 16,892
TIME SINCE LAST CYCLE SWITCH 15              DURATION OF LAST CYCLE 31
MAX THROTTLE TIME 0                          MAX CACHE PERCENTAGE 94
HA WRITES 234,614,783                        RPTD HA WRITES 111,595,049
HA DUP. SLOTS 16,738,983                     SECONDARY DELAY 46
LAST CYCLE SIZE 25,662                       DROP PRIORITY 33
CLEANUP RUNNING ( N )                        MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( Y )                    SRDFA DSE ACTIVE ( N )
MSC ACTIVE ( N )
----------------------------------------------------------------------
END OF DISPLAY

#SQ SRDFA,RMT(ACDF,28)

EMCMN00I SRDF-HC : (53) #SQ SRDFA,RMT(ACDF,28),CQNAME=(EMC24812235234)
EMCQR00I SRDF-HC DISPLAY FOR (52) #SQ SRDFA,LCL(ACDF,28) 053



Display secondary DSE activation:

#SQ SRDFA_DSE,RMT(ACDF,28)

EMCMN00I SRDF-HC:(54) #SQ SRDFA_DSE,RMT(ACDF,28),CQNAME=(EMC24812235234)

EMCQR00I SRDF-HC DISPLAY FOR (53) #SQ SRDFA,RMT(ACDF,28) 055
MY SERIAL #  MY MICROCODE
------------ ------------
000190102000 5772-79
. . . . . . . . . . . . . . . . . . . . . . . .
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
28     Y   F  28     000290100810 5772-83      G(R1>R2) SRDFA ACTIVE
RDFG1      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
----------------------------------------------------------------------
SECONDARY SIDE: CYCLE NUMBER 11,418          CYCLE SUSPENDED ( N )
RESTORE DONE ( Y )
RECEIVE CYCLE SIZE 25,666                    APPLY CYCLE SIZE 0
AVERAGE CYCLE TIME 31                        AVERAGE CYCLE SIZE 17,178
TIME SINCE LAST CYCLE SWITCH 16              DURATION OF LAST CYCLE 31
MAX THROTTLE TIME 0                          MAX CACHE PERCENTAGE 94
TOTAL RESTORES 123,751,179                   TOTAL MERGES 22,970,336
SECONDARY DELAY 47                           DROP PRIORITY 33
CLEANUP RUNNING ( N )                        HOST INTERVENTION REQUIRED ( N )
SRDFA TRANSMIT IDLE ( Y )                    SRDFA DSE ACTIVE ( N )
MSC ACTIVE ( N )
----------------------------------------------------------------------
END OF DISPLAY


Display the DSE pools for the primary and secondary:

#SQ SRDFA_DSE,LCL(ACDF,28)

EMCMN00I SRDF-HC:(55)#SQ SRDFA_DSE,LCL(ACDF,28),CQNAME=(EMC24812235234)

EMCQR00I SRDF-HC DISPLAY FOR (54) #SQ SRDFA_DSE,RMT(ACDF,28) 057
MY SERIAL #  MY MICROCODE
------------ ------------
000190102000 5772-79
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
28     Y   F  28     000290100810 5772-83      G(R1>R2) SRDFA ACTIVE
RDFG1      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
----------------------------------------------------------------------
SECONDARY SIDE: CYCLE NUMBER 11,418          SRDFA DSE ACTIVE ( N )
THRESHOLD PERCENTAGE 40                      SRDFA DSE AUTO ACTIVATE ( N )
RECEIVE CYCLE SIZE 25,666                    APPLY CYCLE SIZE 0
DSE USED TRACKS 0                            DSE USED TRACKS 0
DSE MDATA TRACKS 0                           DSE MDATA TRACKS 0
----------------------------------------------------------------------
FBA POOL NAME                                3390 POOL NAME DSER1
AS400 POOL NAME                              3380 POOL NAME
----------------------------------------------------------------------
END OF DISPLAY

Display DSE for the primary SRDF/A session:

EMCQR00I SRDF-HC DISPLAY FOR (55) #SQ SRDFA_DSE,LCL(ACDF,28) 058
MY SERIAL #  MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
28     Y   F  28     000190102000 5772-79      G(R1>R2) SRDFA ACTIVE
RDFG1      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
----------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 11,419            SRDFA DSE ACTIVE ( N )
THRESHOLD PERCENTAGE 40                      SRDFA DSE AUTO ACTIVATE ( N )
CAPTURE CYCLE SIZE 17,287                    TRANSMIT CYCLE SIZE 0
DSE USED TRACKS 0                            DSE USED TRACKS 0
DSE MDATA TRACKS 0                           DSE MDATA TRACKS 0
----------------------------------------------------------------------
FBA POOL NAME                                3390 POOL NAME DSEL1
AS400 POOL NAME                              3380 POOL NAME
----------------------------------------------------------------------
END OF DISPLAY


Establishing a Cascaded SRDF configuration

This section discusses establishing a Cascaded SRDF configuration.

Process overview

Once the required SRDF groups have been established, setting up cascaded replication is a two-step process, which can be performed in any order:

1. Create the initial R1-R2 pair between workload site A and secondary site B (the first hop), or alternatively between secondary site B and tertiary site C (the second hop).

2. Set up the R1-R2 pair for the remaining hop: between secondary site B and tertiary site C (the second hop), or alternatively between workload site A and secondary site B (the first hop).
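Because the two hops can be created in either order, the device at site B ends up as an R21 (an R2 for the first hop and an R1 for the second) regardless of sequencing. A small sketch of that role computation, with invented names purely for illustration:

```python
# Sketch: a site's resulting SRDF role given which hops exist.
# Site B is the target of hop A->B and the source of hop B->C, so
# with both hops defined it becomes an R21. The order in which the
# hops are created does not change the result.

def site_roles(hops):
    """hops: list of (source_site, target_site) tuples."""
    sources = {s for s, _ in hops}
    targets = {t for _, t in hops}
    roles = {}
    for site in sources | targets:
        if site in sources and site in targets:
            roles[site] = "R21"          # both R1 and R2 personalities
        elif site in sources:
            roles[site] = "R1"
        else:
            roles[site] = "R2"
    return roles

print(site_roles([("A", "B"), ("B", "C")]))   # roles: A=R1, B=R21, C=R2
print(site_roles([("B", "C"), ("A", "B")]))   # same roles, other order
```

Either creation order yields the same A=R1, B=R21, C=R2 relationship, which is why the two steps above can be performed in any order.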

Cascaded replication example

This section pertains to the z/OS Cascaded SRDF interface and is included for introductory purposes only. It is not intended to replace EMC product-specific documentation. For additional details on the Cascaded SRDF interface for z/OS, please refer to the EMC SRDF Host Component for z/OS Version 5.6 Product Guide (P/N 300-000-163) available on Powerlink.

In the following examples, we create the required SRDF groups, perform device pairing, and run various queries to determine the state of the environment.


1. The first step to establish a default cascaded SRDF environment is to create an SRDF group from workload site A to secondary site B for the first hop:

Figure 71 Create an SRDF group from workload site A to secondary site B for the first hop

The figure shows Symmetrix 9E00 (serial # 000190103387) at workload site A, 4200 (serial # 000190100849) at secondary site B, and 9A00 (serial # 000190300344) at tertiary site C, in an R1 > R21 > R2 cascaded relationship.


2. The next step is to create an SRDF group between secondary site B and tertiary site C for the second hop:

Figure 72 Create SRDF group between workload sites B and C for second hop



3. Once the first and second hop SRDF groups have been created, performing an SQ LINK command displays the status of both groups.


4. Once the SRDF groups have been created and verified, identify volumes to be paired between workload site A and secondary site B for the first hop.

5. Once the volumes to be paired have been identified, create device pairs between workload site A and secondary site B for the first hop:

Figure 73 Create device pairs between workload sites A and B for the first hop

The figure shows device pairs created between devs 50-57 at workload site A and devs B0-B7 at secondary site B; devs A0-A7 reside at tertiary site C.


6. Once the first hop has been completed, identify volumes to be paired between secondary site B and tertiary site C for the second hop:

Figure 74 Volumes to be paired between sites B and C for the second hop



7. Once the second hop volumes have been chosen, create device pairs between secondary site B and tertiary site C for the second hop. However, performing a typical CREATEPAIR will result in an error.

Note: CREATEPAIR will default to synchronous mode, which is not a valid mode for the second hop of a Cascaded SRDF relationship.
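The restriction in the note can be expressed as a simple validation rule: synchronous mode is not valid for the second hop, so the pair must be created in another mode such as adaptive copy disk (SRDF/A can then be activated on that leg, as shown later in this section). The sketch below only models the rule; CREATEPAIR itself is a Host Component command, and the function and mode names here are invented for illustration.

```python
# Sketch of the mode rule for cascaded SRDF: the second (B->C) hop
# may not run in synchronous mode, so a CREATEPAIR left at its SYNC
# default fails there. ADCOPY-DISK (adaptive copy disk) is accepted,
# and the leg can later run SRDF/A. Illustrative model only.

VALID_SECOND_HOP_MODES = {"ADCOPY-DISK", "SRDFA"}

def validate_createpair(hop, mode="SYNC"):
    """Return 'ok' or an error string for a createpair attempt."""
    if hop == 2 and mode not in VALID_SECOND_HOP_MODES:
        return f"error: {mode} not valid for second hop"
    return "ok"

print(validate_createpair(1))                       # first hop, SYNC default
print(validate_createpair(2))                       # second hop, SYNC: error
print(validate_createpair(2, mode="ADCOPY-DISK"))   # second hop: accepted
```

This mirrors step 8 below, where the pairs between sites B and C are created with Adaptive Copy disk specified explicitly.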


8. With this restriction in mind, create device pairs between secondary site B and tertiary site C for the second hop, specifying Adaptive Copy disk as the mode of operation:

Figure 75 Create device pairs between sites B and C specifying Adaptive Copy



For comparison purposes, an SQ VOL command can also be performed against a concurrent SRDF configuration.

There have also been changes to the RMT parameter of the SQ VOL command to address access to the second hop.


The SRDF/A leg may now be activated and the second hop queried to validate the expected mode of operation. (Note that the -c immediately after the SRDF group number indicates that the group is participating in a cascaded relationship.)

Similarly, the output of the SQ VOL command has also been changed to indicate a cascaded SRDF relationship: CAS in the status field indicates a cascaded SRDF relationship.


Basic SRDF/A Operations

This chapter presents these topics:

◆ Resuming SRDF/A after normal termination or temporary link failure
◆ All links are lost
◆ Perform BCV split of the R2s and BCVs on the R2 side
◆ Reestablish BCVs

Resuming SRDF/A after normal termination or temporary link failure

The following examples show the outcome and recovery process for two different SRDF/A deactivation scenarios:

◆ PEND_DROP — SRDF/A will wait until the end of the SRDF/A cycle, and then do a DROP. The primary site will have R2 invalid tracks. The secondary site will not have R1 invalid tracks. A consistent copy will exist on the secondary side.

◆ Link Failure — SRDF/A will DROP the sessions immediately and make all the devices not ready on the links. The primary site will have R2 invalid tracks. The secondary site will have R1 invalid tracks. A consistent copy will exist on the secondary side:

• The recovery routines for a link failure are the same as those for an outage caused by the global cache limits being exceeded.

Note: The output from SCF will be different from that of a production environment due to the debug(verbose)=on parameter being set for illustration purposes.
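The outcomes of the two deactivation scenarios above can be summarized side by side. The sketch below simply encodes what the bullets state as a lookup table; it is a summary aid, not product logic.

```python
# Summary of the two SRDF/A deactivation scenarios described above.
# Encodes the stated outcomes only; not actual product behavior.

OUTCOMES = {
    "PEND_DROP": {
        "drop_timing": "at end of current SRDF/A cycle",
        "primary_r2_invalid_tracks": True,
        "secondary_r1_invalid_tracks": False,
        "secondary_consistent_copy": True,
    },
    "LINK_FAILURE": {
        "drop_timing": "immediate; devices made not ready on links",
        "primary_r2_invalid_tracks": True,
        "secondary_r1_invalid_tracks": True,
        "secondary_consistent_copy": True,
    },
}

for scenario, outcome in OUTCOMES.items():
    print(scenario, "-> secondary R1 invalid tracks:",
          outcome["secondary_r1_invalid_tracks"])
```

The key operational difference is that only a link failure leaves R1 invalid tracks on the secondary side; in both cases a consistent copy exists on the secondary side.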

Recovery procedure from a PEND_DROP action

The following steps are taken to perform a PEND_DROP, followed by the additional steps needed to restart SRDF/A MSC.

A PEND_DROP is performed through the MSC address space so that the recovery process is as simple and straightforward as possible. This type of process should be used to perform a Disaster Recovery test or any other type of validation at the recovery site, and also to stop SRDF/A if there are error conditions occurring that may result in a hard drop of SRDF/A.

Query SRDF/A session to verify that it is active and in MSC

#SQ SRDFA,LCL(ACDF,29)

EMCMN00I SRDF-HC : (75) #SQ SRDFA,LCL(ACDF,29)
EMCQR00I SRDF-HC DISPLAY FOR (75) #SQ SRDFA,LCL(ACDF,29) 504
MY SERIAL #  MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL    OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL      TYPE    AUTO-LINKS-RECOVERY    LINKS_DOMINO     MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
29     Y   F  29     000190102000 5772-79      G(R1>R2) SRDFA A MSC
RDFG2      DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO  (MSC1 )
----------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 11,502            MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y )                   TOLERANCE ( N )
CAPTURE CYCLE SIZE 20,183                    TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 30                        AVERAGE CYCLE SIZE 17,891
TIME SINCE LAST CYCLE SWITCH 19              DURATION OF LAST CYCLE 31
MAX THROTTLE TIME 0                          MAX CACHE PERCENTAGE 94
HA WRITES 239,607,367                        RPTD HA WRITES 113,457,324
HA DUP. SLOTS 17,085,488                     SECONDARY DELAY 50
LAST CYCLE SIZE 12,001                       DROP PRIORITY 33
CLEANUP RUNNING ( N )                        MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( N )                    SRDFA DSE ACTIVE ( N )
MSC ACTIVE ( Y )                             ACTIVE SINCE 09/05/2007 13:03:37
CAPTURE TAG C0000000 0000000C                TRANSMIT TAG C0000000 0000000B
GLOBAL CONSISTENCY ( Y )                     STAR RECOVERY AVAILABLE ( N )
----------------------------------------------------------------------
END OF DISPLAY

Query the volumes to verify status

#SQ VOL,SCFG(GNS1)

EMCQV00I SRDF-HC DISPLAY FOR (77) #SQ VOL,SCFG(GNS1) 530
DV_ADDR| _SYM_ |      |TOTAL|SYS |DCB|CNTLUNIT|  | R1   | R2   |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
AA00 00 0140 0140 28 EMAA00 3339 OAPV 1 R/W-AS L1 0 0 **
AA01 01 0141 0141 28 EMAA01 3339 OAPV 1 R/W-AS L1 0 0 **
AA02 02 0142 0142 28 EMAA02 3339 OAPV 1 R/W-AS L1 0 0 **
AA03 03 0143 0143 28 EMAA03 3339 OAPV 1 R/W-AS L1 0 0 **
AA04 04 0144 0144 28 EMAA04 3339 OAPV 1 R/W-AS L1 0 0 **
AA05 05 0145 0145 28 EMAA05 3339 OAPV 1 R/W-AS L1 0 0 **
AA06 06 0146 0146 28 EMAA06 3339 OAPV 1 R/W-AS L1 0 0 **
AA07 07 0147 0147 28 EMAA07 3339 OAPV 1 R/W-AS L1 0 0 **
END OF DISPLAY

Note: The control unit status of R/W-AS indicates that the volumes are in SRDF/A mode.


Issue the PEND_DROP command through SCF

F SSCF57P,MSC PENDDROP

MSC,PENDDROP
MSC - PENDDROP COMMAND ACCEPTED.
MSC - GROUP=MSC1 TIME OF DAY FOR CYCLE 0000000D IS 13:09:50
MSC - GROUP=MSC1 TIME OF DAY FOR CYCLE 0000000E IS 13:10:21
MSC - GROUP=MSC1 (ACD0,28) SRDFA IS NOT ACTIVE4
MSC - GROUP=MSC1 (ACD1,29) SRDFA IS NOT ACTIVE4
MSC - GROUP=MSC1 (ACD1,29) HOST CLEANUP INVOKED
MSC - GROUP=MSC1 HOST CLEANUP IS RUNNING
MSC - GROUP=MSC1 (ACD0,28) SRDFA IS NOT ACTIVE4
MSC - GROUP=MSC1 (ACD1,29) SRDFA IS NOT ACTIVE4
MSC - GROUP=MSC1 HOST CLEANUP - PHASE2 IS RUNNING
MSC - GROUP=MSC1 HOST CLEANUP CASE2 RUNNING
MSC - GROUP=MSC1 (ACD1,29) PROCESS_FC10-DISCARD INACTIVE CYCLE
MSC - GROUP=MSC1 (ACD0,28) PROCESS_FC10-DISCARD INACTIVE CYCLE
MSC - GROUP=MSC1 HOST CLEANUP IS FINISHED

Note: During the automatic cleanup process performed by MSC, CASE2 was run. For further explanation of the MSC cleanup process, please refer to the appropriate SRDF product guide.

Query source volumes to display status

#SQ VOL,SCFG(GNS1)

EMCQV00I SRDF-HC DISPLAY FOR (81) #SQ VOL,SCFG(GNS1) 820
DV_ADDR| _SYM_ |      |TOTAL|SYS |DCB|CNTLUNIT|  | R1   | R2   |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
AA00 00 0140 0140 28 EMAA00 3339 OAPV 1 TNR-SY L1 0 4,483 91
AA01 01 0141 0141 28 EMAA01 3339 OAPV 1 TNR-SY L1 0 4,483 91
AA02 02 0142 0142 28 EMAA02 3339 OAPV 1 TNR-SY L1 0 4,483 91
AA03 03 0143 0143 28 EMAA03 3339 OAPV 1 TNR-SY L1 0 4,483 91
AA04 04 0144 0144 28 EMAA04 3339 OAPV 1 TNR-SY L1 0 4,483 91
AA05 05 0145 0145 28 EMAA05 3339 OAPV 1 TNR-SY L1 0 4,483 91
AA06 06 0146 0146 28 EMAA06 3339 OAPV 1 TNR-SY L1 0 4,483 91
AA07 07 0147 0147 28 EMAA07 3339 OAPV 1 TNR-SY L1 0 4,483 91
END OF DISPLAY
EMCQV00I SRDF-HC DISPLAY FOR (81) #SQ VOL,SCFG(GNS1) 821
DV_ADDR| _SYM_ |      |TOTAL|SYS |DCB|CNTLUNIT|  | R1   | R2   |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
AA08 08 0148 0148 29 EMAA08 3339 OAPV 1 TNR-SY L1 0 4,483 91
AA09 09 0149 0149 29 EMAA09 3339 OAPV 1 TNR-SY L1 0 4,483 91
AA0A 0A 014A 014A 29 EMAA0A 3339 OAPV 1 TNR-SY L1 0 4,483 91
AA0B 0B 014B 014B 29 EMAA0B 3339 OAPV 1 TNR-SY L1 0 4,483 91
AA0C 0C 014C 014C 29 EMAA0C 3339 OAPV 1 TNR-SY L1 0 4,483 91
AA0D 0D 014D 014D 29 EMAA0D 3339 OAPV 1 TNR-SY L1 0 4,483 91
AA0E 0E 014E 014E 29 EMAA0E 3339 OAPV 1 TNR-SY L1 0 4,483 91
AA0F 0F 014F 014F 29 EMAA0F 3339 OAPV 1 TNR-SY L1 0 4,483 91
END OF DISPLAY


Note: The volumes are in TNR-SY status and have R2 Invalid Tracks. The default mode that SRDF/A drops into after a deactivation is synchronous.

Query SRDF/A to get status

#SQ SRDFA,SCFG(GNS1)

EMCMN00I SRDF-HC : (80) #SQ SRDFA,SCFG(GNS1)
EMCQR06E QUERY FOR SRDFA - SRDFA NOT FOUND (CUU:A900)
EMCQR06E QUERY FOR SRDFA - SRDFA NOT FOUND (CUU:A900)

Note: SRDF/A is not found because the PENDDROP deactivated the SRDF/A sessions.

Recovery process to reactivate SRDF/A after a PENDDROP

This process must be carried out on all SRDF/A sessions in the MSC_GROUP. The output shown is for one of the SRDF/A sessions; the process was also carried out for the other session, but its output is not displayed here.

Split off the BCVs to save a gold copy

Split the BCVs from the R2s to preserve a gold copy (for potential restart) in case the SRDF/A restart process fails:

//EMCTF    EXEC PGM=EMCTF,REGION=6500K
//STEPLIB  DD DSN=ICO.PROD.LINKLIB,DISP=SHR
//SCF$V570 DD DUMMY
//SYSOUT   DD SYSOUT=X
//SYSIN    DD *
  GLOBAL WAIT,MAXRC(4)
  QUERY 01,RMT(ACD5,29),16,156
  SPLIT 02,RMT(ACD5,156-165,29)
  QUERY 03,RMT(ACD5,29),16,156

BCVI020I Start of INPUT control statement(s) from SYSIN
BCVI018I (0001) GLOBAL WAIT,MAXRC(4)
BCVI018I (0002) QUERY 01,RMT(ACD5,29),8,156
BCVI018I (0003) SPLIT 02,RMT(ACD5,156-15E,29)
BCVI018I (0004) QUERY 03,RMT(ACD5,29),8,156
BCVI021I End of INPUT control statement(s) from SYSIN

BCVM039I (0002) Process input statementBCVM004I QUERY status through device ACD5, MICRO-CODE level 5x72 type SYM7, S/N

000190102000BCVM003I ...BCV... ...STD... ACTION LAST PROT

MIRROR BCV

Resuming SRDF/A after normal termination or temporary link failure 311

Page 312: EMC SRDF/A and SRDF/A Multi-Session Consistency on z · PDF fileEMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS Version 1.5 • Planning an SRDF/A and SRDF/A Multi-Session

312

Basic SRDF/A Operations

BCVM003I CUU SYM# CUU SYM# ITRK-BCV ITRK-STD STATUS USED BCV EMUL #CYLS TYPE SYNC MODE

BCVM003I ---- 0156r ---- 0140r 0 0 INUSE EST 0156 3390 3339 MIRR RD5/CLONE

BCVM003I ---- 0157r ---- 0141r 0 0 INUSE EST 0157 3390 3339 MIRR RD5/CLONE

BCVM003I ---- 0158r ---- 0142r 0 0 INUSE EST 0158 3390 3339 MIRR RD5/CLONE

BCVM003I ---- 0159r ---- 0143r 0 0 INUSE EST 0159 3390 3339 MIRR RD5/CLONE

BCVM003I ---- 015Ar ---- 0144r 0 0 INUSE EST 015A 3390 3339 MIRR RD5/CLONE

BCVM003I ---- 015Br ---- 0145r 0 0 INUSE EST 015B 3390 3339 MIRR RD5/CLONE

BCVM003I ---- 015Cr ---- 0146r 0 0 INUSE EST 015C 3390 3339 MIRR RD5/CLONE

BCVM003I ---- 015Dr ---- 0147r 0 0 INUSE EST 015D 3390 3339 MIRR RD5/CLONE

BCVM039I (0003) Process input statement
BCVM004I INSTANT SPLIT REMOTE BCV SYMDEV 0156 through ACD5
BCVM140I Command processed via TF/Clone emulation
BCVM039I (0003) Process input statement
BCVM004I INSTANT SPLIT REMOTE BCV SYMDEV 0157 through ACD5
BCVM140I Command processed via TF/Clone emulation
BCVM039I (0003) Process input statement
BCVM004I INSTANT SPLIT REMOTE BCV SYMDEV 0158 through ACD5
BCVM140I Command processed via TF/Clone emulation
BCVM039I (0003) Process input statement
BCVM004I INSTANT SPLIT REMOTE BCV SYMDEV 0159 through ACD5
BCVM140I Command processed via TF/Clone emulation
BCVM039I (0003) Process input statement
BCVM004I INSTANT SPLIT REMOTE BCV SYMDEV 015A through ACD5
BCVM140I Command processed via TF/Clone emulation
BCVM039I (0003) Process input statement
BCVM004I INSTANT SPLIT REMOTE BCV SYMDEV 015B through ACD5
BCVM140I Command processed via TF/Clone emulation
BCVM039I (0003) Process input statement
BCVM004I INSTANT SPLIT REMOTE BCV SYMDEV 015C through ACD5
BCVM140I Command processed via TF/Clone emulation
BCVM039I (0003) Process input statement
BCVM004I INSTANT SPLIT REMOTE BCV SYMDEV 015D through ACD5
BCVM140I Command processed via TF/Clone emulation

BCVM039I (0003) Process input statement

EMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS


BCVM004I INSTANT SPLIT REMOTE BCV SYMDEV 015E through ACD5
BCVM140I Command processed via TF/Clone emulation

BCVM039I (0004) Process input statement
BCVM004I QUERY status through device ACD5, MICRO-CODE level 5x72 type SYM7, S/N 0001901-02000
BCVM003I ...BCV... ...STD... ACTION LAST PROT MIRROR BCV
BCVM003I CUU SYM# CUU SYM# ITRK-BCV ITRK-STD STATUS USED BCV EMUL #CYLS TYPE SYNC MODE
BCVM003I ---- 0156r ---- 0140r 0 0 AVAIL 0156 3390 3339 MIRR YES RD5/CLONE
BCVM003I ---- 0157r ---- 0141r 0 0 AVAIL 0157 3390 3339 MIRR YES RD5/CLONE
BCVM003I ---- 0158r ---- 0142r 0 0 AVAIL 0158 3390 3339 MIRR YES RD5/CLONE
BCVM003I ---- 0159r ---- 0143r 0 0 AVAIL 0159 3390 3339 MIRR YES RD5/CLONE
BCVM003I ---- 015Ar ---- 0144r 0 0 AVAIL 015A 3390 3339 MIRR YES RD5/CLONE
BCVM003I ---- 015Br ---- 0145r 0 0 AVAIL 015B 3390 3339 MIRR YES RD5/CLONE
BCVM003I ---- 015Cr ---- 0146r 0 0 AVAIL 015C 3390 3339 MIRR YES RD5/CLONE
BCVM003I ---- 015Dr ---- 0147r 0 0 AVAIL 015D 3390 3339 MIRR YES RD5/CLONE
BCVM047I All control statements processed, highest RC 0
AA0F 0F 014F 014F 29 EMAA0F 3339 OAPV 1 TNR-AS L1 0 2,709 94
END OF DISPLAY

Set volumes to ADCOPY-Disk mode

#SC VOL,SCFG(GNS1),ADCOPY-DISK

EMCMN00I SRDF-HC : (82) #SC VOL,SCFG(GNS1),ADCOPY-DISK
EMCGM07I COMMAND COMPLETED (CUU:A900)
EMCGM07I COMMAND COMPLETED (CUU:A900)

Query the volumes to verify status change to ADCOPY

#SQ VOL,SCFG(GNS1)

EMCMN00I SRDF-HC : (83) #SQ VOL,SCFG(GNS1)
EMCQV00I SRDF-HC DISPLAY FOR (83) #SQ VOL,SCFG(GNS1) 856
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
AA00 00 0140 0140 28 EMAA00 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA01 01 0141 0141 28 EMAA01 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA02 02 0142 0142 28 EMAA02 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA03 03 0143 0143 28 EMAA03 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA04 04 0144 0144 28 EMAA04 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA05 05 0145 0145 28 EMAA05 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA06 06 0146 0146 28 EMAA06 3339 OAPV 1 TNR-AD L1 0 4,483 91


AA07 07 0147 0147 28 EMAA07 3339 OAPV 1 TNR-AD L1 0 4,483 91
END OF DISPLAY
EMCQV00I SRDF-HC DISPLAY FOR (83) #SQ VOL,SCFG(GNS1) 857
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
AA08 08 0148 0148 29 EMAA08 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA09 09 0149 0149 29 EMAA09 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA0A 0A 014A 014A 29 EMAA0A 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA0B 0B 014B 014B 29 EMAA0B 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA0C 0C 014C 014C 29 EMAA0C 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA0D 0D 014D 014D 29 EMAA0D 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA0E 0E 014E 014E 29 EMAA0E 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA0F 0F 014F 014F 29 EMAA0F 3339 OAPV 1 TNR-AD L1 0 4,483 91
END OF DISPLAY

Note: The status has changed from ‘-SY’ to ‘-AD’, indicating that the volume status has changed from synchronous to Adaptive Copy.

Resume SRDF replication processing:

#SC VOL,SCFG(GNS1),RDF-RSUM

EMCMN00I SRDF-HC : (84) #SC VOL,SCFG(GNS1),RDF-RSUM
EMCGM07I COMMAND COMPLETED (CUU:A900)
EMCGM07I COMMAND COMPLETED (CUU:A900)

Query the volumes to verify that the replication started:

#SQ VOL,SCFG(GNS1)

EMCMN00I SRDF-HC : (85) #SQ VOL,SCFG(GNS1)
EMCQV00I SRDF-HC DISPLAY FOR (85) #SQ VOL,SCFG(GNS1) 884
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
AA00 00 0140 0140 28 EMAA00 3339 OAPV 1 R/W-AD L1 0 50 99
AA01 01 0141 0141 28 EMAA01 3339 OAPV 1 R/W-AD L1 0 2,306 95
AA02 02 0142 0142 28 EMAA02 3339 OAPV 1 R/W-AD L1 0 52 99
AA03 03 0143 0143 28 EMAA03 3339 OAPV 1 R/W-AD L1 0 690 98
AA04 04 0144 0144 28 EMAA04 3339 OAPV 1 R/W-AD L1 0 1,323 97
AA05 05 0145 0145 28 EMAA05 3339 OAPV 1 R/W-AD L1 0 1,174 97
AA06 06 0146 0146 28 EMAA06 3339 OAPV 1 R/W-AD L1 0 51 99
AA07 07 0147 0147 28 EMAA07 3339 OAPV 1 R/W-AD L1 0 605 98
END OF DISPLAY
EMCQV00I SRDF-HC DISPLAY FOR (85) #SQ VOL,SCFG(GNS1) 885
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
AA08 08 0148 0148 29 EMAA08 3339 OAPV 1 R/W-AD L1 0 74 99
AA09 09 0149 0149 29 EMAA09 3339 OAPV 1 R/W-AD L1 0 3,616 92
AA0A 0A 014A 014A 29 EMAA0A 3339 OAPV 1 R/W-AD L1 0 79 99
AA0B 0B 014B 014B 29 EMAA0B 3339 OAPV 1 R/W-AD L1 0 3,371 93
AA0C 0C 014C 014C 29 EMAA0C 3339 OAPV 1 R/W-AD L1 0 87 99
AA0D 0D 014D 014D 29 EMAA0D 3339 OAPV 1 R/W-AD L1 0 2,441 95
AA0E 0E 014E 014E 29 EMAA0E 3339 OAPV 1 R/W-AD L1 0 470 99


AA0F 0F 014F 014F 29 EMAA0F 3339 OAPV 1 R/W-AD L1 0 2,983 94
END OF DISPLAY

Note: The status has changed from not ready on the link to R/W-AD, indicating that the replication process has restarted.

Activate SRDF/A

#SC SRDFA,LCL(ACDF,28),ACT

EMCMN00I SRDF-HC : (89) #SC SRDFA,LCL(ACDF,28),ACT
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

#SC SRDFA,LCL(ACDF,29),ACT

EMCMN00I SRDF-HC : (89) #SC SRDFA,LCL(ACDF,29),ACT
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

Query SRDF/A to validate activation

#SQ SRDFA,SCFG(GNS1)

EMCMN00I SRDF-HC : (91) #SQ SRDFA,SCFG(GNS1)
EMCQR00I SRDF-HC DISPLAY FOR (91) #SQ SRDFA,SCFG(GNS1) 937
MY SERIAL #  MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
28 Y F 28 000190102000 5772-79 G(R1>R2) SRDFA ACTIVE
RDFG1 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO (MSC1 )
----------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 7  MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y )  TOLERANCE ( N )
CAPTURE CYCLE SIZE 2,199  TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 36  AVERAGE CYCLE SIZE 18,589
TIME SINCE LAST CYCLE SWITCH 8  DURATION OF LAST CYCLE 30
MAX THROTTLE TIME 0  MAX CACHE PERCENTAGE 94
HA WRITES 240,131,947  RPTD HA WRITES 113,683,730
HA DUP. SLOTS 17,117,606  SECONDARY DELAY 38
LAST CYCLE SIZE 11,411  DROP PRIORITY 33
CLEANUP RUNNING ( N )  MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( Y )  SRDFA DSE ACTIVE ( Y )
MSC ACTIVE ( N )
----------------------------------------------------------------------
END OF DISPLAY
EMCQR00I SRDF-HC DISPLAY FOR (91) #SQ SRDFA,SCFG(GNS1) 938
MY SERIAL #  MY MICROCODE
------------ ------------
000290100810 5772-83


MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
29 Y F 29 000190102000 5772-79 G(R1>R2) SRDFA ACTIVE
RDFG2 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO (MSC1 )
----------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 7  MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y )  TOLERANCE ( N )
CAPTURE CYCLE SIZE 87  TRANSMIT CYCLE SIZE 9,342
AVERAGE CYCLE TIME 36  AVERAGE CYCLE SIZE 19,751
TIME SINCE LAST CYCLE SWITCH 0  DURATION OF LAST CYCLE 30
MAX THROTTLE TIME 0  MAX CACHE PERCENTAGE 94
HA WRITES 240,131,947  RPTD HA WRITES 113,683,730
HA DUP. SLOTS 17,117,606  SECONDARY DELAY 30
LAST CYCLE SIZE 10,407  DROP PRIORITY 33
CLEANUP RUNNING ( N )  MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( N )  SRDFA DSE ACTIVE ( Y )
MSC ACTIVE ( N )
----------------------------------------------------------------------
END OF DISPLAY

Refresh MSC for activation

F SSCF57P,MSC REFRESH

SCF1390I MSC REFRESH
SCF1391I MSC - REFRESH COMMAND ACCEPTED.
SCF1557I MSC - GROUP=MSC1 WAITING FOR INITIALIZATION
SCF1321I MSC - TASK DISABLED
SCF1320I MSC - TASK ENABLED
EMCMB0EI MSC_GROUP_NAME= MSC1 HAS PASSED VALIDATION
EMCMB0FI MSC HAS POSTED SCF WITH NEW DEFINITION(S)
SCF1568I MSC - GROUP=MSC1 WEIGHT FACTOR = 0

Reload MSC with SRDF/A init deck

#SC GLOBAL,PARM_REFRESH

EMCMN00I SRDF-HC : (73) #SC GLOBAL,PARM_REFRESH
EMCPS03I REFRESH COMPLETE, STATISTICS FOR ADDED DEVICES FOLLOW
EMCPS00I SSID(S): 34 TOTAL DEV(S): 6,237 SUPPORTED DEV(S): 37

F SSCF57P,MSC,REFRESH
SCF1390I MSC,REFRESH
SCF1391I MSC - REFRESH COMMAND ACCEPTED.
SCF1321I MSC - TASK DISABLED
SCF1320I MSC - TASK ENABLED
EMCMB0EI MSC_GROUP_NAME= MSC1 HAS PASSED VALIDATION
EMCMB0FI MSC HAS POSTED SCF WITH NEW DEFINITION(S)


SCF1568I MSC - GROUP=MSC1 WEIGHT FACTOR = 0
F SSCF57P,MSC,REFRESH
SCF0069I SSCF57P(S0004361) Registration Lock released, Holdtime 10, CUU ACD1, CNTRL 0002901-00810
SCF1342I MSC - GROUP=MSC1 PROCESS_FC03-ALL BOXES ACTIVE

SCF1523I MSC - GROUP=MSC1 GLOBAL CONSISTENCY HAS BEEN ACHIEVED

SCF1564I MSC - GROUP=MSC1 TIME OF DAY FOR CYCLE 00000001 IS 13:03:43.86

SCF0069I Registration Lock released, Holdtime 10, CUU ACD1, CNTRL 0002901-00810
SCF1342I MSC - GROUP=MSC1 PROCESS_FC03-ALL BOXES ACTIVE

SCF1523I MSC - GROUP=MSC1 GLOBAL CONSISTENCY HAS BEEN ACHIEVED SCF1564I MSC - GROUP=MSC1 TIME OF DAY FOR CYCLE 00000001 IS 13:03:43.86

Query SRDF/A to verify MSC activation

#SQ SRDFA,LCL(ACDF,28)

EMCMN00I SRDF-HC : (74) #SQ SRDFA,LCL(ACDF,28)
EMCQR00I SRDF-HC DISPLAY FOR (74) #SQ SRDFA,LCL(ACDF,28) 490
MY SERIAL #  MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
28 Y F 28 000190102000 5772-79 G(R1>R2) SRDFA A MSC
RDFG1 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO (MSC1 )
----------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 11,502  MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y )  TOLERANCE ( N )
CAPTURE CYCLE SIZE 5,958  TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 30  AVERAGE CYCLE SIZE 16,592
TIME SINCE LAST CYCLE SWITCH 11  DURATION OF LAST CYCLE 29
MAX THROTTLE TIME 0  MAX CACHE PERCENTAGE 94
HA WRITES 239,236,978  RPTD HA WRITES 113,324,506
HA DUP. SLOTS 17,058,485  SECONDARY DELAY 40
LAST CYCLE SIZE 12,497  DROP PRIORITY 33
CLEANUP RUNNING ( N )  MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( Y )  SRDFA DSE ACTIVE ( N )
MSC ACTIVE ( Y )  ACTIVE SINCE 09/05/2007 13:03:37
CAPTURE TAG C0000000 00000006  TRANSMIT TAG C0000000 00000005
GLOBAL CONSISTENCY ( Y )  STAR RECOVERY AVAILABLE ( N )
----------------------------------------------------------------------
END OF DISPLAY


Query of RDFGRP 29

#SQ SRDFA,LCL(ACDF,29)

EMCMN00I SRDF-HC : (75) #SQ SRDFA,LCL(ACDF,29)
EMCQR00I SRDF-HC DISPLAY FOR (75) #SQ SRDFA,LCL(ACDF,29) 504
MY SERIAL #  MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
29 Y F 29 000190102000 5772-79 G(R1>R2) SRDFA A MSC
RDFG2 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO (MSC1 )
----------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 11,502  MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y )  TOLERANCE ( N )
CAPTURE CYCLE SIZE 20,183  TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 30  AVERAGE CYCLE SIZE 17,891
TIME SINCE LAST CYCLE SWITCH 19  DURATION OF LAST CYCLE 31
MAX THROTTLE TIME 0  MAX CACHE PERCENTAGE 94
HA WRITES 239,607,367  RPTD HA WRITES 113,457,324
HA DUP. SLOTS 17,085,488  SECONDARY DELAY 50
LAST CYCLE SIZE 12,001  DROP PRIORITY 33
CLEANUP RUNNING ( N )  MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( N )  SRDFA DSE ACTIVE ( N )
MSC ACTIVE ( Y )  ACTIVE SINCE 09/05/2007 13:03:37
CAPTURE TAG C0000000 0000000C  TRANSMIT TAG C0000000 0000000B
GLOBAL CONSISTENCY ( Y )  STAR RECOVERY AVAILABLE ( N )
----------------------------------------------------------------------
END OF DISPLAY

Summary of PEND_DROP recovery process

The recovery process for SRDF/A after a PEND_DROP is straightforward. As stated previously, with a PEND_DROP, SRDF/A waits until a cycle switch has completed before performing the DROP function. Because this leaves no invalid tracks on the R2, recovery is a simple RDF-RSUM, after which SRDF/A is reactivated. This procedure avoids a difficult recovery process and should be used when preparing for disaster recovery testing or when stopping SRDF/A after an error condition is flagged.
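The resume sequence walked through above can be condensed into the following command sketch. The group name (GNS1), gatekeeper CUU (ACDF), RA group numbers (28 and 29), and SCF task name (SSCF57P) are from this example configuration and will differ at other sites:

```
#SC VOL,SCFG(GNS1),ADCOPY-DISK
#SC VOL,SCFG(GNS1),RDF-RSUM
#SC SRDFA,LCL(ACDF,28),ACT
#SC SRDFA,LCL(ACDF,29),ACT
F SSCF57P,MSC,REFRESH
```

Adaptive copy disk mode lets the owed tracks drain without impacting host response time; SRDF/A is reactivated for each RA group only after the resume, and the MSC refresh revalidates the group so that global consistency can be reestablished.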


Recovery process from a Link Failure (all the links fail)

The following recovery process is used by SRDF/A MSC after a link failure has occurred. The same process applies when SRDF/A drops because of an out-of-cache condition or any other SRDF/A failure.

When a total link failure occurs with SRDF/A running, the data at the R2 site is consistent and usable for recovery operations. The difference from a PEND_DROP is that SRDF/A does not wait for a cycle switch to occur before stopping replication in response to an error condition or a manual request to stop. Because the cycle switch does not complete first, there will be invalid tracks at both the R1 and R2 sites. This does not mean that the data at the R2 site is invalid; it means that SRDF/A must use recovery processing to ensure data consistency. When this occurs, special procedures must be used to get the SRDF/A process up and running again.

Since the links failed on the Symmetrix systems with SRDF/A running, MSC did not have an opportunity to do the necessary cleanup, and therefore, a manual cleanup process must be done prior to restarting SRDF/A.

The recovery procedures are in the following section.

Query the SRDF/A session to verify that it is active and in MSC

#SQ SRDFA,LCL(ACDF,29)

EMCMN00I SRDF-HC : (75) #SQ SRDFA,LCL(ACDF,29)
EMCQR00I SRDF-HC DISPLAY FOR (75) #SQ SRDFA,LCL(ACDF,29) 504
MY SERIAL #  MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
29 Y F 29 000190102000 5772-79 G(R1>R2) SRDFA A MSC
RDFG2 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO (MSC1 )
----------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 11,502  MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y )  TOLERANCE ( N )
CAPTURE CYCLE SIZE 20,183  TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 30  AVERAGE CYCLE SIZE 17,891
TIME SINCE LAST CYCLE SWITCH 19  DURATION OF LAST CYCLE 31


MAX THROTTLE TIME 0  MAX CACHE PERCENTAGE 94
HA WRITES 239,607,367  RPTD HA WRITES 113,457,324
HA DUP. SLOTS 17,085,488  SECONDARY DELAY 50
LAST CYCLE SIZE 12,001  DROP PRIORITY 33
CLEANUP RUNNING ( N )  MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( N )  SRDFA DSE ACTIVE ( N )
MSC ACTIVE ( Y )  ACTIVE SINCE 09/05/2007 13:03:37
CAPTURE TAG C0000000 0000000C  TRANSMIT TAG C0000000 0000000B
GLOBAL CONSISTENCY ( Y )  STAR RECOVERY AVAILABLE ( N )
----------------------------------------------------------------------
END OF DISPLAY

Query the SRDF/A volumes to verify status

#SQ VOL,SCFG(GNS1)

EMCMN00I SRDF-HC : (139) #SQ VOL,SCFG(GNS1)
EMCQV00I SRDF-HC DISPLAY FOR (139) #SQ VOL,SCFG(GNS1) 216
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
AA00 00 0140 0140 28 EMAA00 3339 OAPV 1 R/W-AS L1 0 0 **
AA01 01 0141 0141 28 EMAA01 3339 OAPV 1 R/W-AS L1 0 0 **
AA02 02 0142 0142 28 EMAA02 3339 OAPV 1 R/W-AS L1 0 0 **
AA03 03 0143 0143 28 EMAA03 3339 OAPV 1 R/W-AS L1 0 0 **
AA04 04 0144 0144 28 EMAA04 3339 OAPV 1 R/W-AS L1 0 0 **
AA05 05 0145 0145 28 EMAA05 3339 OAPV 1 R/W-AS L1 0 0 **
AA06 06 0146 0146 28 EMAA06 3339 OAPV 1 R/W-AS L1 0 0 **
AA07 07 0147 0147 28 EMAA07 3339 OAPV 1 R/W-AS L1 0 0 **
END OF DISPLAY
EMCQV00I SRDF-HC DISPLAY FOR (139) #SQ VOL,SCFG(GNS1) 217
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
AA08 08 0148 0148 29 EMAA08 3339 OAPV 1 R/W-AS L1 0 0 **
AA09 09 0149 0149 29 EMAA09 3339 OAPV 1 R/W-AS L1 0 0 **
AA0A 0A 014A 014A 29 EMAA0A 3339 OAPV 1 R/W-AS L1 0 0 **
AA0B 0B 014B 014B 29 EMAA0B 3339 OAPV 1 R/W-AS L1 0 0 **
AA0C 0C 014C 014C 29 EMAA0C 3339 OAPV 1 R/W-AS L1 0 0 **
AA0D 0D 014D 014D 29 EMAA0D 3339 OAPV 1 R/W-AS L1 0 0 **
AA0E 0E 014E 014E 29 EMAA0E 3339 OAPV 1 R/W-AS L1 0 0 **
AA0F 0F 014F 014F 29 EMAA0F 3339 OAPV 1 R/W-AS L1 0 0 **
END OF DISPLAY

Note: The control unit status of R/W-AS indicates that the volumes are in SRDF/A mode. Everything is normal.


All links are lost

Interrogate SCF to determine the status of MSC

The following example illustrates an MSC failure due to an all-links-lost condition. The text below was taken from the SSCF57P joblog using SDSF. RDFGRP 28 has Transmit_Idle and DSE activated.

Using GNS (GNS1), this display includes groups 28 and 29. Both groups are on the same DMX and connected using the same RAs.

11.22.56 S0004361 SCF1463E MSC - GROUP=MSC1 (ACD1,29) SRDFA IS NOT ACTIVE
11.22.56 S0004361 SCF1586I MSC - GROUP=MSC1 (ACD0,28) SER= 000290100810 IN SRDFA TRANSMIT IDLE
11.22.56 S0004361 SCF1562I MSC - GROUP=MSC1 (ACD0,28) SER= 000290100810 CYCLE SWITCH DELAY - TRANSMIT
11.22.56 S0004361 SCF1563I MSC - GROUP=MSC1 (ACD0,28) SER= 000190102000 CYCLE SWITCH DELAY - RESTORE
11.23.01 S0004361 SCF1586I MSC - GROUP=MSC1 (ACD0,28) SER= 000290100810 IN SRDFA TRANSMIT IDLE
11.23.01 S0004361 SCF1562I MSC - GROUP=MSC1 (ACD0,28) SER= 000290100810 CYCLE SWITCH DELAY - TRANSMIT
11.23.01 S0004361 SCF1563I MSC - GROUP=MSC1 (ACD0,28) SER= 000190102000 CYCLE SWITCH DELAY - RESTORE
11.23.07 S0004361 SCF1586I MSC - GROUP=MSC1 (ACD0,28) SER= 000290100810 IN SRDFA TRANSMIT IDLE
11.23.07 S0004361 SCF1562I MSC - GROUP=MSC1 (ACD0,28) SER= 000290100810 CYCLE SWITCH DELAY - TRANSMIT
11.23.07 S0004361 SCF1563I MSC - GROUP=MSC1 (ACD0,28) SER= 000190102000 CYCLE SWITCH DELAY - RESTORE
11.23.13 S0004361 SCF1586I MSC - GROUP=MSC1 (ACD0,28) SER= 000290100810 IN SRDFA TRANSMIT IDLE
11.23.13 S0004361 SCF1562I MSC - GROUP=MSC1 (ACD0,28) SER= 000290100810 CYCLE SWITCH DELAY - TRANSMIT
11.23.13 S0004361 SCF1563I MSC - GROUP=MSC1 (ACD0,28) SER= 000190102000 CYCLE SWITCH DELAY - RESTORE
11.23.19 S0004361 SCF1586I MSC - GROUP=MSC1 (ACD0,28) SER= 000290100810 IN SRDFA TRANSMIT IDLE
11.23.19 S0004361 SCF1562I MSC - GROUP=MSC1 (ACD0,28) SER= 000290100810 CYCLE SWITCH DELAY - TRANSMIT
11.23.19 S0004361 SCF1563I MSC - GROUP=MSC1 (ACD0,28) SER= 000190102000 CYCLE SWITCH DELAY - RESTORE
11.23.24 S0004361 SCF1586I MSC - GROUP=MSC1 (ACD0,28) SER= 000290100810 IN SRDFA TRANSMIT IDLE
11.23.24 S0004361 SCF1562I MSC - GROUP=MSC1 (ACD0,28) SER= 000290100810 CYCLE SWITCH DELAY - TRANSMIT
11.23.24 S0004361 SCF1563I MSC - GROUP=MSC1 (ACD0,28) SER= 000190102000 CYCLE SWITCH DELAY - RESTORE
11.23.26 S0004361 SCF1405E MSC - GROUP=MSC1 (ACD0,28) HOST CLEANUP INVOKED


11.23.26 S0004361 SCF1560I MSC - GROUP=MSC1 (ACD1,29) GOT THE FOLLOWING ERROR
11.23.26 S0004361 SCF1325E MSC-SAI ERROR FOR VID=REQSRDFA R15=24 EMCRC=12 EMCRS=18 EMCRCX=X'17873206'
11.23.41 S0004361 SCF1560I MSC - GROUP=MSC1 (ACD0,28) GOT THE FOLLOWING ERROR
11.23.41 S0004361 SCF1325E MSC-SAI ERROR FOR VID=REQSRDFA R15=24 EMCRC=12 EMCRS=18 EMCRCX=X'17870106'

Note: Once MSC recognized that SRDF/A was not active, the Host Cleanup was invoked and all locks were freed from the devices that were in SRDF/A. During the attempted automatic cleanup, MSC could not communicate with the remote Symmetrix systems, so an SAI ERROR was recorded. This was due to the links not being available, and therefore, a manual cleanup was required.

Query the source and target volumes to display status

#SQ VOL,SCFG(GNS1)

EMCMN00I SRDF-HC : (139) #SQ VOL,SCFG(GNS1)
EMCQV00I SRDF-HC DISPLAY FOR (139) #SQ VOL,SCFG(GNS1) 216
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
AA00 00 0140 0140 28 EMAA00 3339 OAPV 1 LNR-SY L1 0 4,483 91
AA01 01 0141 0141 28 EMAA01 3339 OAPV 1 LNR-SY L1 0 4,483 91
AA02 02 0142 0142 28 EMAA02 3339 OAPV 1 LNR-SY L1 0 4,483 91
AA03 03 0143 0143 28 EMAA03 3339 OAPV 1 LNR-SY L1 0 4,483 91
AA04 04 0144 0144 28 EMAA04 3339 OAPV 1 LNR-SY L1 0 4,483 91
AA05 05 0145 0145 28 EMAA05 3339 OAPV 1 LNR-SY L1 0 4,483 91
AA06 06 0146 0146 28 EMAA06 3339 OAPV 1 LNR-SY L1 0 4,483 91
AA07 07 0147 0147 28 EMAA07 3339 OAPV 1 LNR-SY L1 0 4,483 91
END OF DISPLAY
EMCQV00I SRDF-HC DISPLAY FOR (139) #SQ VOL,SCFG(GNS1) 217
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
AA08 08 0148 0148 29 EMAA08 3339 OAPV 1 LNR-SY L1 0 4,483 91
AA09 09 0149 0149 29 EMAA09 3339 OAPV 1 LNR-SY L1 0 4,483 91
AA0A 0A 014A 014A 29 EMAA0A 3339 OAPV 1 LNR-SY L1 0 4,483 91
AA0B 0B 014B 014B 29 EMAA0B 3339 OAPV 1 LNR-SY L1 0 4,483 91
AA0C 0C 014C 014C 29 EMAA0C 3339 OAPV 1 LNR-SY L1 0 4,483 91
AA0D 0D 014D 014D 29 EMAA0D 3339 OAPV 1 LNR-SY L1 0 4,483 91
AA0E 0E 014E 014E 29 EMAA0E 3339 OAPV 1 LNR-SY L1 0 4,483 91
AA0F 0F 014F 014F 29 EMAA0F 3339 OAPV 1 LNR-SY L1 0 4,483 91
END OF DISPLAY

Note: The volumes are in LNR-SY status and have R2 invalid tracks. The default mode that SRDF/A drops into after deactivation is synchronous, so steps must be taken before reactivation. The LNR status indicates that the links are not ready (link failure).


Query SRDF/A to get status at the source site

#SQ SRDFA,SCFG(GNS1)

EMCMN00I SRDF-HC : (140) #SQ SRDFA,SCFG(GNS1)
EMCQR06E QUERY FOR SRDFA - SRDFA NOT FOUND (CUU:A900)
EMCQR06E QUERY FOR SRDFA - SRDFA NOT FOUND (CUU:A900)

Note: This example shows that SRDF/A is down and there is no communication with the target Symmetrix systems. This correlates with the LNR status in the control unit output from the #SQ VOL command. The link recovery process must now take place.

The links are now recovered


The links need to be reestablished, and a cleanup of MSC SRDF/A sessions may be necessary. It is also best practice to create a BCV copy of the R2 after cleanup has been run.

Recovery process to reactivate SRDF/A after a link failure

This process must be performed on all SRDF/A sessions in the MSC_Group. The output shown is from one of the SRDF/A sessions; the others were processed the same way but are not displayed. The process cannot be executed until the error condition that caused the DROP has been corrected. In this example, the failure was issued manually to simulate the condition.

Query SRDF/A to get status of the target volumes

#SQ SRDFA,RMT(ACDF,28)

EMCMN00I SRDF-HC : (141) #SQ SRDFA,RMT(ACDF,28)
EMCQR06E QUERY FOR SRDFA - SRDFA NOT FOUND (CUU:ACDF)

#SQ SRDFA,RMT(ACDF,29)

EMCMN00I SRDF-HC : (142) #SQ SRDFA,RMT(ACDF,29)
EMCQR06E QUERY FOR SRDFA - SRDFA NOT FOUND (CUU:ACDF)

Run the Manual Cleanup process

There are three MSC recovery scenarios:

1. All R2 receive cycles have the same tag and all are complete—MSC action: Commit Receive Cycle.

2. All R2 receive cycles have the same tag, but only some (not all) are complete—MSC action: Discard Receive Cycle.


3. The apply cycle of one or more of the secondary-site sessions is the same as its associated receive cycle (not all secondary-site receive cycles were committed)—MSC action: Commit and Discard Receive Cycle.

This process can be run from either the primary or the secondary location. The only change needed for each location is the UCB of the SRDF/A volume: if the job is run from the primary location, a primary (R1) volume must be specified in the recovery job; if it is run from the secondary location, a secondary (R2) volume must be specified. In this example, the job is run from the primary site using gatekeeper device ACDF.

//JOBNAMEZ JOB CSE,TF51,CLASS=A,MSGCLASS=X,NOTIFY=USERNAME
//*
//RECOVERY EXEC PGM=SCFRDFME,PARM='Y,ACDF,MSC1 '
//STEPLIB  DD DISP=SHR,DSN=ICO.PROD.LINKLIB
//RPTOUT   DD SYSOUT=*
//SCF$V570 DD DUMMY
//*
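If the job were run from the secondary site instead, only the gatekeeper CUU in the PARM field would change. The CUU shown below (B900) is hypothetical, used purely for illustration; substitute a gatekeeper device on the secondary (R2) controller:

```
//* B900 IS A HYPOTHETICAL R2-SIDE GATEKEEPER CUU, NOT FROM THIS CONFIGURATION
//RECOVERY EXEC PGM=SCFRDFME,PARM='Y,B900,MSC1 '
```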

View output of Cleanup JOB

SCF1315I MSC MODULE= EHCMSCME VER= V5.5.0 PATCH=BASECOD DEBUG ON
MSC_GROUP_NAME=MSC1 CUU = ACDF UCB ADDRESS = 022A8B08
FOUND MSC SCRATCH AREA FOR MSC_GROUP_NAME=MSC1 RDFGRP = 29
VALID RDFGRP FOUND 29
00000002 SRDFA SESSIONS IN MSC
000290100810/28 > 000190102000/28
000290100810/29 > 000190102000/29
OUR SESSION IS RUNNING ON PRIMARY SIDE - 000290100810/29 - 5072
FOUND CU = 000290100810
VALID RDFGRP FOUND 28
SRDFA IS NOT PRESENT ON - PRIMARY SIDE
FOUND SESSION = 000290100810/28 CUU= A900 MCLVL = 5072
------------------------------------------------------------
SRDFA IS NOT PRESENT ON SECONDARY SIDE 000290100810/28
MSC IS ACTIVE
APPLY CYCLE IS EMPTY
RECEIVE TAG = C0000000000008D7 APPLY TAG = C0000000000008D6
------------------------------------------------------------
SRDFA IS NOT PRESENT ON SECONDARY SIDE 000290100810/29
MSC IS ACTIVE
APPLY CYCLE IS EMPTY
RECEIVE TAG = C0000000000008D7 APPLY TAG = C0000000000008D6


------------------------------------------------------------
CASE2 - DISCARD ALL CYCLES
------------------------------------------------------------
SRDFA SESSION = 000290100810/28 CLEARED MBLIST
SRDFA SESSION = 000290100810/28 CLEARED REMOTE MBLIST
SRDFA SESSION = 000290100810/28 CLEARED SCRATCH AREA
SRDFA SESSION = 000290100810/28 CLEARED REMOTE SCRATCH AREA
------------------------------------------------------------
SRDFA SESSION = 000290100810/29 CLEARED MBLIST
SRDFA SESSION = 000290100810/29 CLEARED REMOTE MBLIST
SRDFA SESSION = 000290100810/29 CLEARED SCRATCH AREA
SRDFA SESSION = 000290100810/29 CLEARED REMOTE SCRATCH AREA
------------------------------------------------------------

Note: Case 2 was run for this example. For more detailed information on the MSC cleanup process, refer to the appropriate SRDF product guide.


Perform BCV split of the R2s and BCVs on the R2 side

Create a gold copy from the R2s on the secondary BCVs. Refer to “Split off the BCVs to save a gold copy” on page 311 for detailed examples.

GLOBAL WAIT,MAXRC(4)
QUERY 01,RMT(ACD4,28),16,156
ESTABLISH 02,RMT(ACD4,156-165,140-14F,28)
QUERY 03,RMT(ACD4,28),16,156

Set the SRDF/A volumes to ADCOPY-DISK mode

#SC VOL,SCFG(GNS1),ADCOPY-DISK

EMCMN00I SRDF-HC : (145) #SC VOL,SCFG(GNS1),ADCOPY-DISK
EMCGM07I COMMAND COMPLETED (CUU:A900)
EMCGM07I COMMAND COMPLETED (CUU:A900)

Query the volumes to verify status change to ADCOPY

#SQ VOL,SCFG(GNS1)

EMCMN00I SRDF-HC : (146) #SQ VOL,SCFG(GNS1)
EMCQV00I SRDF-HC DISPLAY FOR (146) #SQ VOL,SCFG(GNS1) 475
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
AA00 00 0140 0140 28 EMAA00 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA01 01 0141 0141 28 EMAA01 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA02 02 0142 0142 28 EMAA02 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA03 03 0143 0143 28 EMAA03 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA04 04 0144 0144 28 EMAA04 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA05 05 0145 0145 28 EMAA05 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA06 06 0146 0146 28 EMAA06 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA07 07 0147 0147 28 EMAA07 3339 OAPV 1 TNR-AD L1 0 4,483 91
END OF DISPLAY
EMCQV00I SRDF-HC DISPLAY FOR (146) #SQ VOL,SCFG(GNS1) 476
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
AA08 08 0148 0148 29 EMAA08 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA09 09 0149 0149 29 EMAA09 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA0A 0A 014A 014A 29 EMAA0A 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA0B 0B 014B 014B 29 EMAA0B 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA0C 0C 014C 014C 29 EMAA0C 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA0D 0D 014D 014D 29 EMAA0D 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA0E 0E 014E 014E 29 EMAA0E 3339 OAPV 1 TNR-AD L1 0 4,483 91
AA0F 0F 014F 014F 29 EMAA0F 3339 OAPV 1 TNR-AD L1 0 4,483 91
END OF DISPLAY


Note: The control unit status has changed from ‘TNR-SY’ to ‘TNR-AD’ indicating that the control unit status has changed from synchronous to Adaptive Copy.

Resume SRDF replication processing

#SC VOL,SCFG(GNS1),RDF-RSUM

EMCMN00I SRDF-HC : (147) #SC VOL,SCFG(GNS1),RDF-RSUM
EMCCVCFE THE FOLLOWING DEVICES REQUIRE SPECIAL PROCESSING BEFORE RESUME 0140-0147
EMCCV25I NO ELIGIBLE DEVICES FOUND, COMMAND ABORTED (CUU:A900)
EMCCVCFE THE FOLLOWING DEVICES REQUIRE SPECIAL PROCESSING BEFORE RESUME 0148-014F
EMCCV25I NO ELIGIBLE DEVICES FOUND, COMMAND ABORTED (CUU:A900)

Note: Since the SRDF links failed and the resulting cleanup discarded the receive delta set, special processing must be completed for volumes whose updates were discarded during SRDF/A recovery. SRDF/A preserved consistency by discarding those tracks, because no transfer-complete indicator was found in the receive delta set; this is normal. The special recovery process is a REFRESH followed by an RFR-RSUM, which recopies the invalid tracks and makes the R1-R2 pair equal. Refer to the appropriate SRDF Host Component guide for more information on REFRESH and RFR-RSUM.
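The REFRESH/RFR-RSUM recovery described in the note above is a two-pass sequence over the affected RDF groups. The sketch below builds the exact Host Component command strings used in this example; `build_recovery_commands` is a hypothetical helper (how the commands are actually submitted, via console or batch, is site-specific).

```python
# Sketch of the REFRESH / RFR-RSUM recovery sequence shown in this section.
# The gatekeeper CUU, RDF group numbers, and device ranges mirror the
# example configuration; substitute your own values.

def build_recovery_commands(gatekeeper, groups):
    """Build REFRESH then RFR-RSUM commands for each (rdfgrp, device_range)."""
    cmds = []
    # Pass 1: issue RNG-REFRESH for each group's device range.
    for grp, dev_range in groups:
        cmds.append(f"#SC VOL,RMT({gatekeeper},{grp}),RNG-REFRESH,{dev_range}")
    # Pass 2: issue RFR-RSUM so the invalid tracks are recopied.
    for grp, dev_range in groups:
        cmds.append(f"#SC VOL,RMT({gatekeeper},{grp}),RFR-RSUM,{dev_range}")
    return cmds

for cmd in build_recovery_commands("ACDF", [("28", "140-147"), ("29", "148-14F")]):
    print(cmd)
```

The ordering matters: all REFRESH commands are completed and verified with #SQ VOL before any RFR-RSUM is issued, exactly as the walkthrough below does.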

Query remote volumes that require special processing

#SQ VOL,RMT(ACDF,28),16,140

EMCMN00I SRDF-HC : (150) #SQ VOL,RMT(ACDF,28),16,140
EMCQV00I SRDF-HC DISPLAY FOR (150) #SQ VOL,RMT(ACDF,28),16,140 778
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
A600 00 0140 0140 28 EMA600 3339 ONPV 0 N/R L2 985 4,483 91
A601 01 0141 0141 28 EMA601 3339 ONPV 0 N/R L2 1,110 4,483 91
A602 02 0142 0142 28 EMA602 3339 ONPV 0 N/R L2 1,208 4,483 91
A603 03 0143 0143 28 EMA603 3339 ONPV 0 N/R L2 1,324 4,483 91
A604 04 0144 0144 28 EMA604 3339 ONPV 0 N/R L2 1,021 4,483 91
A605 05 0145 0145 28 EMA605 3339 ONPV 0 N/R L2 1,054 4,483 91
A606 06 0146 0146 28 EMA606 3339 ONPV 0 N/R L2 1,290 4,483 91
A607 07 0147 0147 28 EMA607 3339 ONPV 0 N/R L2 1,549 4,483 91
A608 08 0148 0148 29 EMA608 3339 ONPV 0 N/R L2 620 4,483 91
A609 09 0149 0149 29 EMA609 3339 ONPV 0 N/R L2 942 4,483 91
A60A 0A 014A 014A 29 EMA60A 3339 ONPV 0 N/R L2 1,174 4,483 91
A60B 0B 014B 014B 29 EMA60B 3339 ONPV 0 N/R L2 767 4,483 91
A60C 0C 014C 014C 29 EMA60C 3339 ONPV 0 N/R L2 782 4,483 91
A60D 0D 014D 014D 29 EMA60D 3339 ONPV 0 N/R L2 776 4,483 91
A60E 0E 014E 014E 29 EMA60E 3339 ONPV 0 N/R L2 672 4,483 91


A60F 0F 014F 014F 29 EMA60F 3339 ONPV 0 N/R L2 624 4,483 91
END OF DISPLAY

Note: The invalid tracks shown in the R1 and R2 INVTRK fields are the reason the special processing is required.

Perform refresh on SRDF/A volumes

This process can be run from either the primary or the secondary location. The only change needed for each location is the UCB of the SRDF/A volume: if the process is run from the primary location, the primary (R1) volume must be specified in the recovery job; if it is run from the secondary location, the secondary (R2) volume must be specified. In this example, the job is run from the primary site (using gatekeeper device ACDF).

#SC VOL,RMT(ACDF,28),RNG-REFRESH,140-147

EMCMN00I SRDF-HC : (156) #SC VOL,RMT(ACDF,28),RNG-REFRESH,140-147
EMCCV1DI PROCESSING RANGE COMMAND FOR DEVICE 0140 FOR 8 DEVICES (CUU:ACDF)

EMCGM07I COMMAND COMPLETED (CUU:ACDF)

#SC VOL,RMT(ACDF,29),RNG-REFRESH,148-14F

EMCMN00I SRDF-HC : (157) #SC VOL,RMT(ACDF,29),RNG-REFRESH,148-14F
EMCCV1DI PROCESSING RANGE COMMAND FOR DEVICE 0148 FOR 8 DEVICES (CUU:ACDF)

EMCGM07I COMMAND COMPLETED (CUU:ACDF)

Query the remote volumes to validate that the action was performed

#SQ VOL,RMT(ACDF,28),16,140

EMCMN00I SRDF-HC : (158) #SQ VOL,RMT(ACDF,28),16,140
EMCQV00I SRDF-HC DISPLAY FOR (158) #SQ VOL,RMT(ACDF,28),16,140 855
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
A600 00 0140 0140 28 EMA600 3339 ONPV 0 N/R -R L2 0 4,483 91
A601 01 0141 0141 28 EMA601 3339 ONPV 0 N/R -R L2 0 4,483 91
A602 02 0142 0142 28 EMA602 3339 ONPV 0 N/R -R L2 0 4,483 91
A603 03 0143 0143 28 EMA603 3339 ONPV 0 N/R -R L2 0 4,483 91
A604 04 0144 0144 28 EMA604 3339 ONPV 0 N/R -R L2 0 4,483 91
A605 05 0145 0145 28 EMA605 3339 ONPV 0 N/R -R L2 0 4,483 91
A606 06 0146 0146 28 EMA606 3339 ONPV 0 N/R -R L2 0 4,483 91
A607 07 0147 0147 28 EMA607 3339 ONPV 0 N/R -R L2 0 4,483 91
A608 08 0148 0148 29 EMA608 3339 ONPV 0 N/R -R L2 0 4,483 91
A609 09 0149 0149 29 EMA609 3339 ONPV 0 N/R -R L2 0 4,483 91
A60A 0A 014A 014A 29 EMA60A 3339 ONPV 0 N/R -R L2 0 4,483 91


A60B 0B 014B 014B 29 EMA60B 3339 ONPV 0 N/R -R L2 0 4,483 91
A60C 0C 014C 014C 29 EMA60C 3339 ONPV 0 N/R -R L2 0 4,483 91
A60D 0D 014D 014D 29 EMA60D 3339 ONPV 0 N/R -R L2 0 4,483 91
A60E 0E 014E 014E 29 EMA60E 3339 ONPV 0 N/R -R L2 0 4,483 91
A60F 0F 014F 014F 29 EMA60F 3339 ONPV 0 N/R -R L2 0 4,483 91
END OF DISPLAY

Note: The R1 field shows 0 invalid tracks and the control unit status is N/R -R. The -R indicates that a refresh has been issued to this volume. To clear this flag, an RFR-RSUM must be issued to the volume.
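The STATUS field in these displays drives the next operator action. The sketch below is an illustrative interpretation of the states seen in this walkthrough (it is not an exhaustive state table, and the meaning of a bare N/R as "refresh still needed" is an assumption based on this example's flow):

```python
# Interpret the control unit STATUS field from a #SQ VOL display row.
# "-R" suffix: REFRESH already issued, RFR-RSUM still outstanding.
# "N/R" alone: pair not ready, special processing not yet started.
# Other states (R/W-AD, TNR-AD, ...): no recovery action in this flow.

def next_action(status: str) -> str:
    status = status.strip()
    if status.endswith("-R"):
        return "RFR-RSUM"
    if status == "N/R":
        return "REFRESH"
    return "NONE"

print(next_action("N/R -R"))  # RFR-RSUM
```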

Perform a RFR-RSUM

#SC VOL,RMT(ACDF,28),RFR-RSUM,140-147

EMCMN00I SRDF-HC : (167) #SC VOL,RMT(ACDF,28),RFR-RSUM,140-147
EMCCVA8I DEVICE 0140 (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCCVA8I DEVICE 0141 (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCCVA8I DEVICE 0142 (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCCVA8I DEVICE 0143 (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCCVA8I DEVICE 0144 (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCCVA8I DEVICE 0145 (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCCVA8I DEVICE 0146 (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCCVA8I DEVICE 0147 (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

#SC VOL,RMT(ACDF,29),RFR-RSUM,148-14F

EMCMN00I SRDF-HC : (168) #SC VOL,RMT(ACDF,29),RFR-RSUM,148-14F
EMCCVA8I DEVICE 0148 (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCCVA8I DEVICE 0149 (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCCVA8I DEVICE 014A (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCCVA8I DEVICE 014B (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCCVA8I DEVICE 014C (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCCVA8I DEVICE 014D (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCCVA8I DEVICE 014E (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCCVA8I DEVICE 014F (R2), ISSUING RFR-RSUM (CUU:ACDF)
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

Note: Once this command completes, the -R flag is reset at the target site and SRDF replication must be restarted.

Query source volumes to verify replication

#SQ VOL,SCFG(GNS1)

EMCMN00I SRDF-HC : (178) #SQ VOL,SCFG(GNS1)
EMCQV00I SRDF-HC DISPLAY FOR (178) #SQ VOL,SCFG(GNS1) 317
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %


AA00 00 0140 0140 28 EMAA00 3339 OAPV 1 R/W-AD L1 0 3,986 92
AA01 01 0141 0141 28 EMAA01 3339 OAPV 1 R/W-AD L1 0 4,031 91
AA02 02 0142 0142 28 EMAA02 3339 OAPV 1 R/W-AD L1 0 4,047 91
AA03 03 0143 0143 28 EMAA03 3339 OAPV 1 R/W-AD L1 0 3,889 92
AA04 04 0144 0144 28 EMAA04 3339 OAPV 1 R/W-AD L1 0 3,988 92
AA05 05 0145 0145 28 EMAA05 3339 OAPV 1 R/W-AD L1 0 3,993 92
AA06 06 0146 0146 28 EMAA06 3339 OAPV 1 R/W-AD L1 0 4,183 91
AA07 07 0147 0147 28 EMAA07 3339 OAPV 1 R/W-AD L1 0 4,070 91
END OF DISPLAY
EMCQV00I SRDF-HC DISPLAY FOR (178) #SQ VOL,SCFG(GNS1) 318
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
AA08 08 0148 0148 29 EMAA08 3339 OAPV 1 R/W-AD L1 0 4,119 91
AA09 09 0149 0149 29 EMAA09 3339 OAPV 1 R/W-AD L1 0 4,103 91
AA0A 0A 014A 014A 29 EMAA0A 3339 OAPV 1 R/W-AD L1 0 4,204 91
AA0B 0B 014B 014B 29 EMAA0B 3339 OAPV 1 R/W-AD L1 0 4,056 91
AA0C 0C 014C 014C 29 EMAA0C 3339 OAPV 1 R/W-AD L1 0 4,168 91
AA0D 0D 014D 014D 29 EMAA0D 3339 OAPV 1 R/W-AD L1 0 4,022 91
AA0E 0E 014E 014E 29 EMAA0E 3339 OAPV 1 R/W-AD L1 0 4,137 91
AA0F 0F 014F 014F 29 EMAA0F 3339 OAPV 1 R/W-AD L1 0 4,034 91
END OF DISPLAY

Note: Replication is resumed and all volumes are in R/W-AD mode.

Activate SRDF/A

#SC SRDFA,LCL(ACDF,28),ACT

EMCMN00I SRDF-HC : (89) #SC SRDFA,LCL(ACDF,28),ACT
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

#SC SRDFA,LCL(ACDF,29),ACT

EMCMN00I SRDF-HC : (89) #SC SRDFA,LCL(ACDF,29),ACT
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

Query SRDF/A to validate activation

#SQ SRDFA,SCFG(GNS1)

EMCMN00I SRDF-HC : (184) #SQ SRDFA,SCFG(GNS1)
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

#SQ SRDFA,LCL(ACDF,28)

EMCMN00I SRDF-HC : (185) #SQ SRDFA,LCL(ACDF,28)
EMCQR00I SRDF-HC DISPLAY FOR (184) #SQ SRDFA,SCFG(GNS1) 456
MY SERIAL # MY MICROCODE
------------ ------------
000290100810 5772-83


MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
28 Y F 28 000190102000 5772-83 G(R1>R2) SRDFA ACTIVE
RDFG1 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
----------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 10 MIN CYCLE TIME 30
SECONDARY CONSISTENT ( N ) TOLERANCE ( N )
CAPTURE CYCLE SIZE 17,508 TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 34 AVERAGE CYCLE SIZE 19,457
TIME SINCE LAST CYCLE SWITCH 22 DURATION OF LAST CYCLE 31
MAX THROTTLE TIME 0 MAX CACHE PERCENTAGE 94
HA WRITES 435,439,818 RPTD HA WRITES 205,123,878
HA DUP. SLOTS 35,716,460 SECONDARY DELAY 53
LAST CYCLE SIZE 21,328 DROP PRIORITY 33
CLEANUP RUNNING ( N ) MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( Y ) SRDFA DSE ACTIVE ( Y )
MSC ACTIVE ( N )
----------------------------------------------------------------------
END OF DISPLAY
EMCQR00I SRDF-HC DISPLAY FOR (184) #SQ SRDFA,SCFG(GNS1) 457
MY SERIAL # MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
29 Y F 29 000190102000 5772-83 G(R1>R2) SRDFA ACTIVE
RDFG2 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
----------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 1 MIN CYCLE TIME 30
SECONDARY CONSISTENT ( N ) TOLERANCE ( N )
CAPTURE CYCLE SIZE 3,054 TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 0 AVERAGE CYCLE SIZE 0
TIME SINCE LAST CYCLE SWITCH 2 DURATION OF LAST CYCLE 0
MAX THROTTLE TIME 0 MAX CACHE PERCENTAGE 94
HA WRITES 435,439,818 RPTD HA WRITES 205,123,878
HA DUP. SLOTS 35,716,460 SECONDARY DELAY 2
LAST CYCLE SIZE 0 DROP PRIORITY 33
CLEANUP RUNNING ( N ) MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( N ) SRDFA DSE ACTIVE ( Y )
MSC ACTIVE ( N )
----------------------------------------------------------------------
END OF DISPLAY


Note: The SECONDARY CONSISTENT flag is set to No because SRDF/A was activated before all invalid tracks had been sent to the recovery site. The pair is not required to be in sync before starting SRDF/A. Once normal SRDF/A processing resumes, this flag switches to Yes.

Restart MSC

F SSCF57P,MSC,RESTART

SCF1390I MSC,RESTART
SCF1391I MSC - RESTART COMMAND ACCEPTED.
SCF1588I MSC - GROUP=MSC1 (ACD0,28) SER= 000290100810 NO LONGER IN SRDFA TRANSMIT IDLE
SCF1321I MSC - TASK DISABLED
SCF1320I MSC - TASK ENABLED
SCF1568I MSC - GROUP=MSC1 WEIGHT FACTOR = 0
SCF1342I MSC - GROUP=MSC1 PROCESS_FC03-ALL BOXES ACTIVE

SCF1523I MSC - GROUP=MSC1 GLOBAL CONSISTENCY HAS BEEN ACHIEVED

SCF1564I MSC - GROUP=MSC1 TIME OF DAY FOR CYCLE 00000001 IS 16:48:50.75

Note: Once MSC is active, look for the SCF1342I FC03 message stating that all boxes are active.
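Automation that watches the restart can key on the same console messages shown above. The sketch below scans captured log lines for the two confirmation messages from this example (SCF1342I "ALL BOXES ACTIVE" and SCF1523I "GLOBAL CONSISTENCY HAS BEEN ACHIEVED"); treat the exact message text as release-dependent.

```python
# Scan MSC restart console output for the messages that confirm a healthy
# restart. Message IDs and text are taken from the sample output above.

REQUIRED = {
    "SCF1342I": "PROCESS_FC03-ALL BOXES ACTIVE",
    "SCF1523I": "GLOBAL CONSISTENCY HAS BEEN ACHIEVED",
}

def msc_restart_ok(log_lines):
    """True only when every required message ID appears with its text."""
    seen = set()
    for line in log_lines:
        for msg_id, text in REQUIRED.items():
            if msg_id in line and text in line:
                seen.add(msg_id)
    return seen == set(REQUIRED)

log = [
    "SCF1342I MSC - GROUP=MSC1 PROCESS_FC03-ALL BOXES ACTIVE",
    "SCF1523I MSC - GROUP=MSC1 GLOBAL CONSISTENCY HAS BEEN ACHIEVED",
]
print(msc_restart_ok(log))  # True
```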

Query SRDF/A to verify MSC activation

#SQ SRDFA,SCFG(GNS1)

EMCMN00I SRDF-HC : (186) #SQ SRDFA,SCFG(GNS1)
EMCQR00I SRDF-HC DISPLAY FOR (186) #SQ SRDFA,SCFG(GNS1) 508
MY SERIAL # MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
28 Y F 28 000190102000 5772-83 G(R1>R2) SRDFA A MSC
RDFG1 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO (MSC1 )
----------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 42 MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y ) TOLERANCE ( N )
CAPTURE CYCLE SIZE 20,466 TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 31 AVERAGE CYCLE SIZE 25,402
TIME SINCE LAST CYCLE SWITCH 24 DURATION OF LAST CYCLE 30
MAX THROTTLE TIME 0 MAX CACHE PERCENTAGE 94


HA WRITES 439,148,305 RPTD HA WRITES 207,054,425
HA DUP. SLOTS 35,991,883 SECONDARY DELAY 54
LAST CYCLE SIZE 26,383 DROP PRIORITY 33
CLEANUP RUNNING ( N ) MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( Y ) SRDFA DSE ACTIVE ( Y )
MSC ACTIVE ( Y ) ACTIVE SINCE 09/07/2007 16:48:44
CAPTURE TAG C0000000 00000004 TRANSMIT TAG C0000000 00000003
GLOBAL CONSISTENCY ( Y ) STAR RECOVERY AVAILABLE ( N )
----------------------------------------------------------------------
END OF DISPLAY
EMCQR00I SRDF-HC DISPLAY FOR (186) #SQ SRDFA,SCFG(GNS1) 509
MY SERIAL # MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
29 Y F 29 000190102000 5772-83 G(R1>R2) SRDFA A MSC
RDFG2 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO (MSC1 )
----------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 33 MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y ) TOLERANCE ( N )
CAPTURE CYCLE SIZE 23,354 TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 30 AVERAGE CYCLE SIZE 28,072
TIME SINCE LAST CYCLE SWITCH 24 DURATION OF LAST CYCLE 30
MAX THROTTLE TIME 0 MAX CACHE PERCENTAGE 94
HA WRITES 439,148,305 RPTD HA WRITES 207,054,425
HA DUP. SLOTS 35,991,883 SECONDARY DELAY 54
LAST CYCLE SIZE 29,139 DROP PRIORITY 33
CLEANUP RUNNING ( N ) MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( N ) SRDFA DSE ACTIVE ( Y )
MSC ACTIVE ( Y ) ACTIVE SINCE 09/07/2007 16:48:44
CAPTURE TAG C0000000 00000004 TRANSMIT TAG C0000000 00000003
GLOBAL CONSISTENCY ( Y ) STAR RECOVERY AVAILABLE ( N )
----------------------------------------------------------------------
END OF DISPLAY

The MSC Flag and the Global Consistency Flag are set to Yes. The feature column also states that SRDF/A is ACTIVE with MSC (SRDFA A MSC).


Reestablish BCVs

At this point the BCVs can be reestablished, so that fast splits are available if an SRDF/A MSC session experiences any problems. If using TF/Clone, a reestablish with pre-copy should not be undertaken. Instead, allow the Clone process to keep track of the changed tracks; the next time the SRDF/A session fails, another Clone can then be taken using the Differential(YES) parameter.
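The resynchronization choice above can be summarized as a small decision rule. This is an illustrative sketch only; the product names follow the text, and the mapping to a concrete command is site- and release-specific.

```python
# Decision rule for local-copy resynchronization after SRDF/A is healthy
# again: TF/Mirror BCVs are reestablished, while TF/Clone relies on a
# differential clone (Differential(YES)) rather than pre-copy reestablish.

def resync_action(copy_technology: str) -> str:
    if copy_technology == "TF/Mirror":
        return "RE-ESTABLISH"
    if copy_technology == "TF/Clone":
        return "CLONE Differential(YES)"
    raise ValueError(f"unknown copy technology: {copy_technology}")

print(resync_action("TF/Clone"))  # CLONE Differential(YES)
```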


7  SRDF/A and SRDF/A MSC Return Home Procedures

This chapter presents these topics:

◆ Restart in the event of disaster or abnormal termination (not a disaster)
◆ Activation of secondary R2 volumes for production processing
◆ Return Home overview


Restart in the event of disaster or abnormal termination (not a disaster)

There are essentially two major categories of circumstances that would require restartability or continuity of business processes as facilitated by SRDF. A true, unexpected disaster is probably the most demanding in terms of the resources, processes, and personnel that need to be harnessed for a successful recovery/restart. The other category is that of abnormal termination of the various processes upon which data flow depends. Either category requires that a customer immediately deploy the proper resources and procedures to rectify the situation.

Disaster

In the event of a disaster where the primary source Symmetrix is lost, it becomes necessary to run database and application services from the Disaster Restart (DR) site (secondary site). A host at the DR site is required for this.

This host must be able to be IPLed without using any of the devices that are part of the SRDF/A group being replicated to the secondary side, and it must have access to the EMC software used to manipulate the Symmetrix DMX systems being used at the secondary side.

If TRANSMIT_IDLE is active on the secondary side, the SRDF/A sessions must be terminated using the DROP_SIDE command.

If running MSC, the cleanup process must first be run to ensure that there is a consistent copy on the secondary devices (R2s). This copy of the R2s should then be saved to the BCVs as a gold copy to facilitate any subsequent recovery processes.

The first requirement is to write-enable the secondary devices. If the GNS or SRDF/A device group is not yet built on the remote host, it must be created using the secondary devices that were remote mirrors of the primary devices on the primary Symmetrix system.

At this point, the host can issue the necessary commands to access the disks.
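The disaster-restart steps above are order-sensitive, and a runbook checklist is a natural way to track them. The sketch below encodes the sequence described in this section; the step wording is informal and the checklist mechanism itself is illustrative, not an EMC tool.

```python
# Ordered checklist of the disaster-restart flow described above:
# DROP_SIDE (if TRANSMIT_IDLE), MSC cleanup, gold copy, write-enable R2s,
# build the device group, then bring the devices online for the host.

DISASTER_RESTART_STEPS = [
    "terminate SRDF/A sessions with DROP_SIDE if TRANSMIT_IDLE is active",
    "run the MSC cleanup process to ensure a consistent R2 copy",
    "save the R2s to BCVs as a gold copy",
    "write-enable the secondary (R2) devices",
    "create the GNS or SRDF/A device group on the remote host if missing",
    "issue the commands that let the host access the disks",
]

def next_step(completed):
    """Return the first step not yet completed, or None when all are done."""
    for step in DISASTER_RESTART_STEPS:
        if step not in completed:
            return step
    return None

print(next_step(set()))
```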

Once the data is available to the remote host, the database can be restarted. The database performs an implicit recovery when activated, or on the first connection if auto restart is enabled. Transactions that were committed but not completed are rolled forward and completed using the information in the active logs.


Transactions that had updates applied to the database but were not committed are rolled back. The result is a transactionally consistent database.
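The implicit recovery just described classifies in-flight work into roll-forward and roll-back sets. The sketch below illustrates that classification conceptually; it is not the behavior of any specific DBMS.

```python
# Conceptual illustration of implicit database restart recovery:
# committed-but-incomplete work is rolled forward from the active logs,
# uncommitted work is rolled back, and durable complete work is untouched.

def classify(transactions):
    """Map each (txn_id, committed, completed) to its recovery action."""
    actions = {}
    for txn_id, committed, completed in transactions:
        if committed and not completed:
            actions[txn_id] = "roll forward"   # finish applying from the logs
        elif not committed:
            actions[txn_id] = "roll back"      # undo partial updates
        else:
            actions[txn_id] = "none"           # already durable and complete
    return actions

print(classify([("T1", True, False), ("T2", False, False), ("T3", True, True)]))
```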

Abnormal termination (not a disaster)

An SRDF session can be interrupted because of any situation that prevents the flow of the data from the primary site to the secondary site (for example, a software failure, network failure, or hardware failure).

This would entail correcting the problem, and resuming operations at the primary site as detailed in Chapter 6, or restarting operations on the secondary site as outlined below.


Activation of secondary R2 volumes for production processing

This process describes how to activate production at the secondary site.

Issue the following queries to determine whether any SRDF/A sessions are still active; this can occur if TRANSMIT_IDLE is ON for any session.

#SQ SRDFA,SCFG(GNSR2)

EMCMN00I SRDF-HC : (113) #SQ SRDFA,SCFG(GNSR2)
EMCQR00I SRDF-HC DISPLAY FOR (113) #SQ SRDFA,SCFG(GNSR2) 711
MY SERIAL # MY MICROCODE
------------ ------------
000190102000 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
01 Y F 01 000290100810 5772-83 G(R1>R2) SRDFA ACTIVE
GROUP2 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
----------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 22,038 MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y ) TOLERANCE ( N )
CAPTURE CYCLE SIZE 0 TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 30 AVERAGE CYCLE SIZE 0
TIME SINCE LAST CYCLE SWITCH 22 DURATION OF LAST CYCLE 30
MAX THROTTLE TIME 0 MAX CACHE PERCENTAGE 94
HA WRITES 0 RPTD HA WRITES 0
HA DUP. SLOTS 0 SECONDARY DELAY 52
LAST CYCLE SIZE 0 DROP PRIORITY 10
CLEANUP RUNNING ( N ) MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( Y ) SRDFA DSE ACTIVE ( N )
MSC ACTIVE ( N )
----------------------------------------------------------------------
28 N ? ? 000290100810 SRDFA T MSC
----------------------------------------------------------------------
SECONDARY SIDE: CYCLE NUMBER 2,310
CYCLE SUSPENDED ( N ) RESTORE DONE ( Y )
RECEIVE CYCLE SIZE 0 APPLY CYCLE SIZE 0
AVERAGE CYCLE TIME 30 AVERAGE CYCLE SIZE 0
TIME SINCE LAST CYCLE SWITCH 7,094 DURATION OF LAST CYCLE 30
MAX THROTTLE TIME 0 MAX CACHE PERCENTAGE 94
TOTAL RESTORES 1,814,815,721 TOTAL MERGES 379,540,319
SECONDARY DELAY 7,124 DROP PRIORITY 33
CLEANUP RUNNING ( N ) HOST INTERVENTION REQUIRED ( N )
SRDFA TRANSMIT IDLE ( Y ) SRDFA DSE ACTIVE ( N )


MSC ACTIVE ( Y ) ACTIVE SINCE 09/25/2007 13:26:07
RECEIVE TAG C0000000 000008FA APPLY TAG C0000000 000008F9
GLOBAL CONSISTENCY ( Y ) STAR RECOVERY AVAILABLE ( N )
STAR SRDFA AHEAD ( N ) STAR/S TARGET INCONSISTENT ( N )
----------------------------------------------------------------------
29 N ? ? 000290100810 SRDFA T MSC
----------------------------------------------------------------------
SECONDARY SIDE: CYCLE NUMBER 2,302
CYCLE SUSPENDED ( N ) RESTORE DONE ( Y )
RECEIVE CYCLE SIZE 62,542 APPLY CYCLE SIZE 0
AVERAGE CYCLE TIME 30 AVERAGE CYCLE SIZE 61,854
TIME SINCE LAST CYCLE SWITCH 7,093 DURATION OF LAST CYCLE 30
MAX THROTTLE TIME 0 MAX CACHE PERCENTAGE 94
TOTAL RESTORES 1,814,815,721 TOTAL MERGES 379,540,319
SECONDARY DELAY 7,123 DROP PRIORITY 33
CLEANUP RUNNING ( N ) HOST INTERVENTION REQUIRED ( N )
SRDFA TRANSMIT IDLE ( Y ) SRDFA DSE ACTIVE ( N )
MSC ACTIVE ( Y ) ACTIVE SINCE 09/25/2007 13:26:07
RECEIVE TAG C0000000 000008FA APPLY TAG C0000000 000008F9
GLOBAL CONSISTENCY ( Y ) STAR RECOVERY AVAILABLE ( N )
STAR SRDFA AHEAD ( N ) STAR/S TARGET INCONSISTENT ( N )
----------------------------------------------------------------------
END OF DISPLAY

Note: The results of the query show that both sessions are still active and running, even though the SRDF links are down. This can be seen in the RDF group status lines in the display above: the "28" indicates the RDF group, the "N" indicates that the links are NOT active, and "SRDFA T MSC" means that TRANSMIT_IDLE is active.
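The same check can be scripted against a captured display. The sketch below scans the RDF group status lines for sessions whose links are down but whose feature column still shows a transmit-idle or active state; the line layout is taken from the sample above and varies by Host Component release.

```python
# Find RDF groups that still need DROP_SIDE: links down ("N" in the ONL
# column) but the feature column still shows "SRDFA T ..." (transmit idle)
# or "SRDFA A ..." (active). Illustrative parsing of the display above.

def sessions_needing_drop(display_lines):
    pending = []
    for line in display_lines:
        tokens = line.split()
        # Group status lines look like: "28 N ? ? 000290100810 SRDFA T MSC"
        if (len(tokens) >= 6 and tokens[1] == "N"
                and tokens[-3] == "SRDFA" and tokens[-2] in ("T", "A")):
            pending.append(tokens[0])
    return pending

sample = [
    "28 N ? ? 000290100810 SRDFA T MSC",
    "29 N ? ? 000290100810 SRDFA T MSC",
]
print(sessions_needing_drop(sample))  # ['28', '29']
```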

Display the volumes

#SQ VOL,SCFG(GNSR2)

EMCMN00I SRDF-HC : (114) #SQ VOL,SCFG(GNSR2)
EMCQV00I SRDF-HC DISPLAY FOR (114) #SQ VOL,SCFG(GNSR2) 719
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
A600 00 0140 0140 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A601 01 0141 0141 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A602 02 0142 0142 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A603 03 0143 0143 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A604 04 0144 0144 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A605 05 0145 0145 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A606 06 0146 0146 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A607 07 0147 0147 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A608 08 0148 0148 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A609 09 0149 0149 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **


A60A 0A 014A 014A 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A60B 0B 014B 014B 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A60C 0C 014C 014C 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A60D 0D 014D 014D 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A60E 0E 014E 014E 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A60F 0F 014F 014F 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **
END OF DISPLAY

Note: The preceding display shows that there are no accumulated R1 or R2 owed tracks, so the DROP_SIDE command can be issued to both sessions.
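That precondition can be checked programmatically from the device rows of the display. The sketch below assumes the column layout of the sample above (the R1 and R2 INVTRK values are the third- and second-to-last tokens of each row); adjust for your Host Component release.

```python
# Verify from #SQ VOL device rows that no R1 or R2 invalid (owed) tracks
# remain before issuing DROP_SIDE. Column positions follow the sample
# display in this section.

def safe_to_drop(rows):
    """rows: strings like 'A600 00 0140 0140 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **'."""
    for row in rows:
        tokens = row.split()
        r1_inv, r2_inv = tokens[-3], tokens[-2]   # R1 INVTRK, R2 INVTRK
        if int(r1_inv.replace(",", "")) or int(r2_inv.replace(",", "")):
            return False
    return True

rows = [
    "A600 00 0140 0140 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **",
    "A601 01 0141 0141 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **",
]
print(safe_to_drop(rows))  # True
```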

#SC SRDFA,LCL(A8DF,28),DROP_SIDE

EMCMN00I SRDF-HC : (116) #SC SRDFA,LCL(A8DF,28),DROP_SIDE
EMCPC08I RAGROUP SPECIFIED DOES NOT EXIST (CUU:A8DF)
EMCGM07I COMMAND COMPLETED (CUU:A8DF)

#SC SRDFA,LCL(A8DF,29),DROP_SIDE

EMCMN00I SRDF-HC : (117) #SC SRDFA,LCL(A8DF,29),DROP_SIDE
EMCPC08I RAGROUP SPECIFIED DOES NOT EXIST (CUU:A8DF)
EMCGM07I COMMAND COMPLETED (CUU:A8DF)

Query the sessions to validate the drop

#SQ SRDFA,A8DF

EMCMN00I SRDF-HC : (118) #SQ SRDFA,A8DF
EMCQR00I SRDF-HC DISPLAY FOR (118) #SQ SRDFA,A8DF 763
MY SERIAL # MY MICROCODE
------------ ------------
000190102000 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
------ --- -- ------ ------------ ------------ -------- ------------
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
---------- ------- ---------------------- ---------------- ----------
01 Y F 01 000290100810 5772-83 G(R1>R2) SRDFA ACTIVE
GROUP2 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
----------------------------------------------------------------------
PRIMARY SIDE: CYCLE NUMBER 22,067 MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y ) TOLERANCE ( N )
CAPTURE CYCLE SIZE 0 TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 30 AVERAGE CYCLE SIZE 0
TIME SINCE LAST CYCLE SWITCH 4 DURATION OF LAST CYCLE 30
MAX THROTTLE TIME 0 MAX CACHE PERCENTAGE 94


HA WRITES 0 RPTD HA WRITES 0
HA DUP. SLOTS 0 SECONDARY DELAY 34
LAST CYCLE SIZE 0 DROP PRIORITY 10
CLEANUP RUNNING ( N ) MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( Y ) SRDFA DSE ACTIVE ( N )
MSC ACTIVE ( N )
----------------------------------------------------------------------
28 N ? ? 000290100810 SRDFA I MSC
----------------------------------------------------------------------
SECONDARY SIDE: CYCLE NUMBER 2,310
CYCLE SUSPENDED ( N ) RESTORE DONE ( Y )
RECEIVE CYCLE SIZE 0 APPLY CYCLE SIZE 0
AVERAGE CYCLE TIME 30 AVERAGE CYCLE SIZE 0
TIME SINCE LAST CYCLE SWITCH 7,962 DURATION OF LAST CYCLE 30
MAX THROTTLE TIME 0 MAX CACHE PERCENTAGE 94
TOTAL RESTORES 1,814,815,721 TOTAL MERGES 379,540,319
SECONDARY DELAY NOT ACTIVE DROP PRIORITY 33
CLEANUP RUNNING ( Y ) HOST INTERVENTION REQUIRED ( Y )
SRDFA TRANSMIT IDLE ( Y ) SRDFA DSE ACTIVE ( N )
MSC ACTIVE ( Y ) ACTIVE SINCE 09/25/2007 13:26:07
RECEIVE TAG C0000000 000008FA APPLY TAG C0000000 000008F9
GLOBAL CONSISTENCY ( Y ) STAR RECOVERY AVAILABLE ( N )
STAR SRDFA AHEAD ( N ) STAR/S TARGET INCONSISTENT ( N )
----------------------------------------------------------------------
29 N ? ? 000290100810 SRDFA I MSC
----------------------------------------------------------------------
SECONDARY SIDE: CYCLE NUMBER 2,302
CYCLE SUSPENDED ( N ) RESTORE DONE ( Y )
RECEIVE CYCLE SIZE 62,542 APPLY CYCLE SIZE 0
AVERAGE CYCLE TIME 30 AVERAGE CYCLE SIZE 61,854
TIME SINCE LAST CYCLE SWITCH 7,962 DURATION OF LAST CYCLE 30
MAX THROTTLE TIME 0 MAX CACHE PERCENTAGE 94
TOTAL RESTORES 1,814,815,721 TOTAL MERGES 379,540,319
SECONDARY DELAY NOT ACTIVE DROP PRIORITY 33
CLEANUP RUNNING ( Y ) HOST INTERVENTION REQUIRED ( Y )
SRDFA TRANSMIT IDLE ( Y ) SRDFA DSE ACTIVE ( N )
MSC ACTIVE ( Y ) ACTIVE SINCE 09/25/2007 13:26:07
RECEIVE TAG C0000000 000008FA APPLY TAG C0000000 000008F9
GLOBAL CONSISTENCY ( Y ) STAR RECOVERY AVAILABLE ( N )
STAR SRDFA AHEAD ( N ) STAR/S TARGET INCONSISTENT ( N )
----------------------------------------------------------------------
END OF DISPLAY

Note: "SRDFA I MSC" indicates that SRDF/A MSC is inactive, and host intervention is required to make the R2s available to the secondary site. The MSC cleanup routine must be run.


//RECOVERY EXEC PGM=SCFRDFME,PARM='Y,A8DF,MSC1 '
//STEPLIB  DD DISP=SHR,DSN=ICO.PROD.LINKLIB
//RPTOUT   DD SYSOUT=*
//SCF$V570 DD DUMMY
-----------------------------------------------------------------------
SCF1315I MSC MODULE= EHCMSCME VER= V5.5.0 PATCH=BASECOD
-----------------------------------------------------------------------
DEBUG ON MSC_GROUP_NAME=MSC1 CUU = A8DF UCB ADDRESS = 022876A8
FOUND MSC SCRATCH AREA FOR MSC_GROUP_NAME=MSC1 RDFGRP = 29
VALID RDFGRP FOUND 29
00000002 SRDFA SESSIONS IN MSC
000290100810/28 > 000190102000/28
000290100810/29 > 000190102000/29
OUR SESSION IS RUNNING ON SECONDARY SIDE - 000190102000/29
SCF1315I MSC MODULE= EHCMSCME VER= V5.5.0 PATCH=BASECOD
DEBUG ON MSC_GROUP_NAME=MSC1 CUU = A8DF UCB ADDRESS = 022876A8
---------------------------------------------------------------------
FOUND MSC SCRATCH AREA FOR MSC_GROUP_NAME=MSC1 RDFGRP = 29
---------------------------------------------------------------------
VALID RDFGRP FOUND 29
---------------------------------------------------------------------
00000002 SRDFA SESSIONS IN MSC
---------------------------------------------------------------------
000290100810/28 > 000190102000/28
000290100810/29 > 000190102000/29
---------------------------------------------------------------------
OUR SESSION IS RUNNING ON SECONDARY SIDE - 000190102000/29
---------------------------------------------------------------------
FOUND CU = 000190102000
VALID RDFGRP FOUND 28
FOUND SESSION = 000190102000/28 CUU= A500 RDFGRP = 28
---------------------------------------------------------------------
REMOTE SAI ERROR CUU = A500 RDFGRP = 28
GROUP ERROR DUE TO LINKS UNAVAILABLE
------------------------------------------------------------------------
000190102000/28 MSC IS ACTIVE
TRANSMIT CYCLE IS EMPTY
APPLY CYCLE IS EMPTY
HOST INTERVENTION REQUIRED
RECEIVE TAG = C0000000000008FA APPLY TAG = C0000000000008F9
REMOTE SAI ERROR CUU = A8DF RDFGRP = 29
GROUP ERROR DUE TO LINKS UNAVAILABLE
000190102000/29 MSC IS ACTIVE
TRANSMIT CYCLE IS EMPTY
APPLY CYCLE IS EMPTY
HOST INTERVENTION REQUIRED
RECEIVE TAG = C0000000000008FA APPLY TAG = C0000000000008F9
------------------------------------------------------------------------------

EMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS


SRDF/A and SRDF/A MSC Return Home Procedures

CASE1 - COMMIT ALL CYCLES
SRDFA SESSION = 000190102000/28 COMMIT RECEIVE CYCLE
SRDFA SESSION = 000190102000/29 COMMIT RECEIVE CYCLE
-----------------------------------------------------------------------------
SRDFA SESSION = 000190102000/28 CLEARED MBLIST
REMOTE SAI ERROR CUU = A500 RDFGRP = 28
R15=00000018 EMCRC=000C EMCRS=0043 EMCVID=ACTSRDFA SUB=14 EMCRCX=00000000

Display the R2 volumes

#SQ VOL,A600,16

EMCMN00I SRDF-HC : (131) #SQ VOL,A600,16
EMCQV00I SRDF-HC DISPLAY FOR (131) #SQ VOL,A600,16 318
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
A600 00 0140 0140 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A601 01 0141 0141 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A602 02 0142 0142 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A603 03 0143 0143 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A604 04 0144 0144 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A605 05 0145 0145 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A606 06 0146 0146 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A607 07 0147 0147 28 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A608 08 0148 0148 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A609 09 0149 0149 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A60A 0A 014A 014A 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A60B 0B 014B 014B 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A60C 0C 014C 014C 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A60D 0D 014D 014D 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A60E 0E 014E 014E 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **
A60F 0F 014F 014F 29 OFLINE 3339 OFFL 0 N/R L2 0 0 **
END OF DISPLAY

Take the SRDF links offline

The following command should be issued to ensure that there is no activity from the R1 site:

#SC LINK,A8D0,3A,OFFLINE

EMCMN00I SRDF-HC : (137) #SC LINK,A8D0,3A,OFFLINE
EMCGM07I COMMAND COMPLETED (CUU:A8D0)

#SC LINK,A8D0,39,OFFLINE

EMCMN00I SRDF-HC : (138) #SC LINK,A8D0,39,OFFLINE
EMCGM07I COMMAND COMPLETED (CUU:A8D0)

Activation of secondary R2 volumes for production processing 343


Enable secondary (R2) volumes to the operational host

#SC VOL,A600,RDY,140-14F

EMCMN00I SRDF-HC : (141) #SC VOL,A600,RDY,140-14F
EMCGM07I COMMAND COMPLETED (CUU:A600)

#SQ VOL,A600,16

EMCMN00I SRDF-HC : (142) #SQ VOL,A600,16
EMCQV00I SRDF-HC DISPLAY FOR (142) #SQ VOL,A600,16 392
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
A600 00 0140 0140 28 OFLINE 3339 OFFL 0 RNR L2 0 0 **
A601 01 0141 0141 28 OFLINE 3339 OFFL 0 RNR L2 0 0 **
A602 02 0142 0142 28 OFLINE 3339 OFFL 0 RNR L2 0 0 **
A603 03 0143 0143 28 OFLINE 3339 OFFL 0 RNR L2 0 0 **
A604 04 0144 0144 28 OFLINE 3339 OFFL 0 RNR L2 0 0 **
A605 05 0145 0145 28 OFLINE 3339 OFFL 0 RNR L2 0 0 **
A606 06 0146 0146 28 OFLINE 3339 OFFL 0 RNR L2 0 0 **
A607 07 0147 0147 28 OFLINE 3339 OFFL 0 RNR L2 0 0 **
A608 08 0148 0148 29 OFLINE 3339 OFFL 0 RNR L2 0 0 **
A609 09 0149 0149 29 OFLINE 3339 OFFL 0 RNR L2 0 0 **
A60A 0A 014A 014A 29 OFLINE 3339 OFFL 0 RNR L2 0 0 **
A60B 0B 014B 014B 29 OFLINE 3339 OFFL 0 RNR L2 0 0 **
A60C 0C 014C 014C 29 OFLINE 3339 OFFL 0 RNR L2 0 0 **
A60D 0D 014D 014D 29 OFLINE 3339 OFFL 0 RNR L2 0 0 **
A60E 0E 014E 014E 29 OFLINE 3339 OFFL 0 RNR L2 0 0 **
A60F 0F 014F 014F 29 OFLINE 3339 OFFL 0 RNR L2 0 0 **
END OF DISPLAY

Make the secondary (R2) devices R/W

#SC VOL,A600,R/W,140-14F

EMCMN00I SRDF-HC : (141) #SC VOL,A600,R/W,140-14F
EMCGM07I COMMAND COMPLETED (CUU:A600)

Display the R/W status before varying devices online:

#SQ VOL,A600,16

EMCMN00I SRDF-HC : (171) #SQ VOL,A600,16
EMCQV00I SRDF-HC DISPLAY FOR (171) #SQ VOL,A600,16 626
DV_ADDR| _SYM_ | |TOTAL|SYS |DCB|CNTLUNIT| | R1 | R2 |SY
SYS CH|DEV RDEV GP|VOLSER| CYLS|STAT|OPN|STATUS |MR|INVTRK|INVTRK| %
A600 00 0140 0140 28 EMAA00 3339 OFFL 0 R/W L2 0 0 **
A601 01 0141 0141 28 EMAA01 3339 OFFL 0 R/W L2 0 0 **
A602 02 0142 0142 28 EMAA02 3339 OFFL 0 R/W L2 0 0 **
A603 03 0143 0143 28 EMAA03 3339 OFFL 0 R/W L2 0 0 **
A604 04 0144 0144 28 EMAA04 3339 OFFL 0 R/W L2 0 0 **
A605 05 0145 0145 28 EMAA05 3339 OFFL 0 R/W L2 0 0 **
A606 06 0146 0146 28 EMAA06 3339 OFFL 0 R/W L2 0 0 **
A607 07 0147 0147 28 EMAA07 3339 OFFL 0 R/W L2 0 0 **
A608 08 0148 0148 29 EMAA08 3339 OFFL 0 R/W L2 0 0 **
A609 09 0149 0149 29 EMAA09 3339 OFFL 0 R/W L2 0 0 **
A60A 0A 014A 014A 29 EMAA0A 3339 OFFL 0 R/W L2 0 0 **
A60B 0B 014B 014B 29 EMAA0B 3339 OFFL 0 R/W L2 0 0 **
A60C 0C 014C 014C 29 EMAA0C 3339 OFFL 0 R/W L2 0 0 **
A60D 0D 014D 014D 29 EMAA0D 3339 OFFL 0 R/W L2 0 0 **
A60E 0E 014E 014E 29 EMAA0E 3339 OFFL 0 R/W L2 0 0 **
A60F 0F 014F 014F 29 EMAA0F 3339 OFFL 0 R/W L2 0 0 **
END OF DISPLAY

Display the state of volumes

Display the state of volumes before making them available for processing:

#SQ STATE,A600,16

EMCMN00I SRDF-HC : (194) #SQ STATE,A600,16
EMCQV01I SRDF-HC DISPLAY FOR (194) #SQ STATE,A600,16 952
DVA | _SYM_ | |SYS |W S A L R T I D A N| | R1 | R2 | SY
SYS |DEV RDEV GP|VOLSER|STAT|R N D N N G T O C /|MR|INVTRK|INVTRK| %
    |         |      |    |T C C R R T A M T R|  |      |      |
A600 0140 0140 28 EMAA00 OFFL W . . L . . . . . . L2 0 0 **
A601 0141 0141 28 EMAA01 OFFL W . . L . . . . . . L2 0 0 **
A602 0142 0142 28 EMAA02 OFFL W . . L . . . . . . L2 0 0 **
A603 0143 0143 28 EMAA03 OFFL W . . L . . . . . . L2 0 0 **
A604 0144 0144 28 EMAA04 OFFL W . . L . . . . . . L2 0 0 **
A605 0145 0145 28 EMAA05 OFFL W . . L . . . . . . L2 0 0 **
A606 0146 0146 28 EMAA06 OFFL W . . L . . . . . . L2 0 0 **
A607 0147 0147 28 EMAA07 OFFL W . . L . . . . . . L2 0 0 **
A608 0148 0148 29 EMAA08 OFFL W . . L . . . . . . L2 0 0 **
A609 0149 0149 29 EMAA09 OFFL W . . L . . . . . . L2 0 0 **
A60A 014A 014A 29 EMAA0A OFFL W . . L . . . . . . L2 0 0 **
A60B 014B 014B 29 EMAA0B OFFL W . . L . . . . . . L2 0 0 **
A60C 014C 014C 29 EMAA0C OFFL W . . L . . . . . . L2 0 0 **
A60D 014D 014D 29 EMAA0D OFFL W . . L . . . . . . L2 0 0 **
A60E 014E 014E 29 EMAA0E OFFL W . . L . . . . . . L2 0 0 **
A60F 014F 014F 29 EMAA0F OFFL W . . L . . . . . . L2 0 0 **
END OF DISPLAY

Note: OFFL under SYS STAT indicates that the devices are offline to z/OS. The W under WRT indicates that the devices are R/W enabled. The L under LNR shows that the links are not ready. Refer to the appropriate Host Component product guide for a complete list of all field values.
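These flag columns lend themselves to simple scripting when many devices must be checked. The following Python sketch parses one device line of the display; the token positions are assumptions inferred from the sample output above, not a documented record layout, so verify them against the Host Component product guide before relying on this.

```python
def parse_sq_state_line(line):
    """Parse one device line of a #SQ STATE display (assumed layout).

    Token positions are inferred from the sample output: device address,
    Symmetrix device, remote device, RDF group, volser, system status,
    then ten single-character flag columns (WRT, SNC, ADC, LNR, ...).
    """
    tokens = line.split()
    flags = tokens[6:16]
    return {
        "dva": tokens[0],
        "volser": tokens[4],
        "sys_stat": tokens[5],
        "rw_enabled": flags[0] == "W",       # W under WRT: device R/W enabled
        "links_not_ready": flags[3] == "L",  # L under LNR: links not ready
    }

# Sample line taken from the display above.
sample = "A600 0140 0140 28 EMAA00 OFFL W . . L . . . . . . L2 0 0 **"
info = parse_sq_state_line(sample)
```

For the sample line, the parser reports the device offline to z/OS, R/W enabled, with links not ready, matching the note above.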


Vary devices online

Vary the devices online and begin production on the secondary site:

V A600-A60F,ONLINE

D U,,,A600,00016,L=MPEDER2-Z
VARY RANGE DISPLAY
IEE457I 15.21.39 UNIT STATUS 955
UNIT TYPE STATUS VOLSER VOLSTATE
A600 3390 O EMAA00 PRIV/RSDNT
A601 3390 O EMAA01 PRIV/RSDNT
A602 3390 O EMAA02 PRIV/RSDNT
A603 3390 O EMAA03 PRIV/RSDNT
A604 3390 O EMAA04 PRIV/RSDNT
A605 3390 O EMAA05 PRIV/RSDNT
A606 3390 O EMAA06 PRIV/RSDNT
A607 3390 O EMAA07 PRIV/RSDNT
A608 3390 O EMAA08 PRIV/RSDNT
A609 3390 O EMAA09 PRIV/RSDNT
A60A 3390 O EMAA0A PRIV/RSDNT
A60B 3390 O EMAA0B PRIV/RSDNT
A60C 3390 O EMAA0C PRIV/RSDNT
A60D 3390 O EMAA0D PRIV/RSDNT
A60E 3390 O EMAA0E PRIV/RSDNT
A60F 3390 O EMAA0F PRIV/RSDNT


Return Home overview

As either name implies, the Return Home, or Failback, process allows the resumption of processing at the original primary production site. Its invocation assumes that an SRDF failover was previously undertaken, either as a scheduled process such as a Disaster Recovery/Restart test, or as an unintended occurrence that triggered the failover procedures. Typically, the Return Home process is applicable in either of the following situations:

◆ After completion of a Disaster Recovery/Restart test.

◆ After the problems at the primary site, which triggered the recovery procedures, have been corrected.

The Failback process assumes the following:

◆ Good production data is currently on the secondary (R2) volumes.

◆ The secondary (R2) volumes were replicated using BCV splits into Gold Copies.

◆ The primary (R1) volumes will be over-written with the data from the secondary (R2) volumes.

◆ Production processing will continue while synchronizing the primary (R1) volumes.

Outline of recovery

Following a planned or unplanned outage of significant length, the number of invalid tracks accumulated on the secondary (R2) devices can be quite large and may elongate the R2-to-R1 synchronization time (the Return Home time). In this situation there is an option that allows the R2 host to continue processing while the R1 devices are returned to a near-synchronous state. When this near-synchronous state is reached with the primary (R1) devices, the applications can be stopped at the secondary (R2) host, and the balance of the data owed to the R1 from the R2 can be resynchronized. Processing can then resume on the primary (R1) host. The steps for recovery are:

1. Restore the major portion of the primary site data using the Pre-Refresh SRDF Pair process.


2. After the Pre-Refresh cycles are completed, the applications are stopped at the secondary R2 site.

3. Reestablish and Split Gold Copy BCVs at the secondary R2 site.

4. The balance of the source data is quickly restored using the Refresh and RFR-RSUM SRDF Pair process.

5. After the refresh is complete, return the R1 devices to primary status by stopping the SRDF session and restoring the synchronization direction from R1<R2 to R1>R2.

6. Activate SRDF/A and MSC.

7. Set Synchronization Direction Allowed parameters back to R1>R2 only.
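The seven steps above can be modeled as a strictly ordered checklist, which is useful when scripting the procedure at a site. The following Python sketch is illustrative only; the step names are hypothetical labels, not Host Component or MSC syntax.

```python
# Hypothetical ordered plan of the return-home sequence described above.
RETURN_HOME_STEPS = [
    ("pre-refresh", "restore the bulk of the R1 data while R2 production runs"),
    ("stop-apps", "stop applications at the secondary (R2) site"),
    ("gold-copy", "reestablish and split Gold Copy BCVs at the R2 site"),
    ("refresh", "restore the remaining tracks with Refresh/RFR-RSUM"),
    ("swap-direction", "stop SRDF and set synchronization back to R1>R2"),
    ("activate", "activate SRDF/A and MSC"),
    ("lock-direction", "restore SYNCH_DIRECTION_ALLOWED to R1>R2 only"),
]

def next_step(completed):
    """Return the first step not yet completed, or None when all are done."""
    for name, _desc in RETURN_HOME_STEPS:
        if name not in completed:
            return name
    return None
```

Driving the procedure through `next_step` enforces the ordering the text requires, for example that the BCV Gold Copy is taken before the final refresh and that SRDF/A is activated only after the direction swap.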

Pre-refresh SRDF pair

The following is a general procedure to pre-refresh an R1 volume from an R2. The terms source and target reflect the normal, or production, designations. The pre-refresh process is used in cases where it is desirable to maintain production on the recovery (secondary) system. All commands assume that the Global SRDFINI files allow R1<>R2 settings, so that an individual RDF group can have its synch direction changed without changing the SRDF parm.

At the Primary (R1) site, if the primary (R1) volumes have not already been made “not ready” to the host, execute the following commands:

Vary the primary (R1) volumes offline to the z/OS systems

V AA00-AA0F,OFFLINE

D U,,,AA00,00016,L=MPEDER1-Z
VARY RANGE DISPLAY
IEE457I 10.35.59 UNIT STATUS 249
UNIT TYPE STATUS VOLSER VOLSTATE
AA00 3390 F-NRD /RSDNT
AA01 3390 F-NRD /RSDNT
AA02 3390 F-NRD /RSDNT
AA03 3390 F-NRD /RSDNT
AA04 3390 F-NRD /RSDNT
AA05 3390 F-NRD /RSDNT
AA06 3390 F-NRD /RSDNT
AA07 3390 F-NRD /RSDNT
AA08 3390 F-NRD /RSDNT
AA09 3390 F-NRD /RSDNT
AA0A 3390 F-NRD /RSDNT
AA0B 3390 F-NRD /RSDNT
AA0C 3390 F-NRD /RSDNT
AA0D 3390 F-NRD /RSDNT
AA0E 3390 F-NRD /RSDNT
AA0F 3390 F-NRD /RSDNT

Display the current settings

Display the current settings for the RDFGRPs 28 and 29 from the primary site:

#SQ RDFGRP,RMT(A8DF,28),RA(28)

EMCMN00I SRDF-HC : (34) #SQ RDFGRP,RMT(A8DF,28),RA(28)
EMCQR00I SRDF-HC DISPLAY FOR (34) #SQ RDFGRP,RMT(A8DF,28),RA(28) 574
MY SERIAL # MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
28 Y F 28 000190102000 5772-83 G(R1>R2)
RDFG1 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
MY DIR# OS RA# ST -----MY WWN----- ----IN COUNT---- ---OUT COUNT----
39 38 02 5006048C52A59298 00000000030E56B8 000001304A705630
   39 02 0000000004951BA0 00000133F06683F0
         0000000007A37258 000002643AD6DA20
3A 38 02 5006048C52A59299 00000000045A3588 000001445C93A030
   39 02 0000000004128198 00000147C304D550
         00000000086CB720 0000028C1F987580

#SQ RDFGRP,RMT(A8DF,29),RA(29)

EMCMN00I SRDF-HC : (35) #SQ RDFGRP,RMT(A8DF,29),RA(29)
EMCQR00I SRDF-HC DISPLAY FOR (35) #SQ RDFGRP,RMT(A8DF,29),RA(29) 579
MY SERIAL # MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
29 Y F 29 000190102000 5772-83 G(R1>R2)
RDFG2 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
MY DIR# OS RA# ST -----MY WWN----- ----IN COUNT---- ---OUT COUNT----
39 38 02 5006048C52A59298 00000000448DE9F0 0000063FCCC59BF8
   39 02 000000007BA60598 00000650C2F70980
         00000000C033EF88 00000C908FBCA578
3A 38 02 5006048C52A59299 00000000612D2060 00000665281BA808
   39 02 000000005F7B2220 00000674450306E0
         00000000C0A84280 00000CD96D1EAEE8

Make the primary (R1s) unavailable

#SC VOL,RMT(A8DF,28),RDF-NRDY,140-147

EMCMN00I SRDF-HC : (28) #SC VOL,RMT(A8DF,28),RDF-NRDY,140-147
EMCGM07I COMMAND COMPLETED (CUU:A8DF)

#SC VOL,RMT(A8DF,29),RDF-NRDY,148-14F

EMCMN00I SRDF-HC : (29) #SC VOL,RMT(A8DF,29),RDF-NRDY,148-14F
EMCGM07I COMMAND COMPLETED (CUU:A8DF)

Set the SYNCH_DIRECTION at the primary host

At the Primary (R1) site, manually update RDF parameters to SYNCH_DIRECTION_ALLOWED=R1<>R2.

#SC GLOBAL,PARM_REFRESH

EMCMN00I SRDF-HC : (82) #SC GLOBAL,PARM_REFRESH
EMCPS03I REFRESH COMPLETE, STATISTICS FOR ADDED DEVICES FOLLOW
EMCPS00I SSID(S): 31 TOTAL DEV(S): 5,478 SUPPORTED DEV(S): 37

#SC RDFGRP,ACDF,29,SYNCH_DIRECTION,R1<R2

EMCMN00I SRDF-HC : (36) #SC RDFGRP,ACDF,29,SYNCH_DIRECTION,R1<R2
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

Set the SYNCH_DIRECTION at the secondary host

At the secondary (R2) site, optionally set the synchronization direction for both local and remote using the following commands:

Manually update RDF parameters to SYNCH_DIRECTION_ALLOWED=R1<>R2.

#SC GLOBAL,PARM_REFRESH


EMCMN00I SRDF-HC : (82) #SC GLOBAL,PARM_REFRESH
EMCPS03I REFRESH COMPLETE, STATISTICS FOR ADDED DEVICES FOLLOW
EMCPS00I SSID(S): 31 TOTAL DEV(S): 5,478 SUPPORTED DEV(S): 37

Set synchronization direction from R2 volumes to the R1 volumes.

#SC RDFGRP,A8DF,29,SYNCH_DIRECTION,R1<R2

EMCMN00I SRDF-HC : (36) #SC RDFGRP,A8DF,29,SYNCH_DIRECTION,R1<R2
EMCGM07I COMMAND COMPLETED (CUU:A8DF)

#SQ RDFGRP,A8DF,RA(29)

EMCQR00I SRDF-HC DISPLAY FOR (37) #SQ RDFGRP,A8DF,RA(29) 768
MY SERIAL # MY MICROCODE
------------ ------------
000190102000 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
29 Y F 29 000290100810 5772-83 R1<R2
RDFG2 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO
MY DIR# OS RA# ST -----MY WWN----- ----IN COUNT---- ---OUT COUNT----
39 39 02 5006048AD52E7C18 0000066528316AF8 0000000061175F28
   38 02 0000063FCCD2E388 0000000044816140
         00000CA4F5044E80 00000000A598C068
3A 39 02 5006048AD52E7C19 0000067445168918 000000005F6D5B00
   38 02 00000650C3148238 000000007B93B828
         00000CC5082B0B50 00000000DB011328
END OF DISPLAY

Mark R1 volumes to be updated from secondary R2 volumes using the following commands:

#SC VOL,RMT(A8DF,28),RNG-PREFRESH,140-147

EMCMN00I SRDF-HC : (64) #SC VOL,RMT(A8DF,28),RNG-PREFRESH,140-147
EMCCV1DI PROCESSING RANGE COMMAND FOR DEVICE 0140 FOR 8 DEVICES (CUU:A8DF)

#SC VOL,RMT(A8DF,29),RNG-PREFRESH,148-14F

EMCMN00I SRDF-HC : (64) #SC VOL,RMT(A8DF,29),RNG-PREFRESH,148-14F
EMCCV1DI PROCESSING RANGE COMMAND FOR DEVICE 0148 FOR 8 DEVICES (CUU:A8DF)

Start the refresh operation using the following commands:

#SC VOL,RMT(A8DF,28),RNG-PRE-RSUM,140-147

EMCMN00I SRDF-HC : (67) #SC VOL,RMT(A8DF,28),RNG-PRE-RSUM,140-147
EMCCV1DI PROCESSING RANGE COMMAND FOR DEVICE 0140 FOR 8 DEVICES (CUU:A8DF)

#SC VOL,RMT(A8DF,29),RNG-PRE-RSUM,148-14F

EMCMN00I SRDF-HC : (65) #SC VOL,RMT(A8DF,29),RNG-PRE-RSUM,148-14F
EMCCV1DI PROCESSING RANGE COMMAND FOR DEVICE 0148 FOR 8 DEVICES (CUU:A8DF)

EMCGM07I COMMAND COMPLETED (CUU:A8DF)

Monitor the process for completion using the invalid-track status commands, watching for the number of tracks owed to the R1 volumes to reach zero.

#SQ VOL,RMT(A8DF,28),INV_TRKS,8

EMCMN00I SRDF-HC : (69) #SQ VOL,RMT(A8DF,28),INV_TRKS,8
EMCQV12I NO DEVICES FOUND WITH INVALID TRACKS (CUU:A8DF)

#SQ VOL,RMT(A8DF,29),INV_TRKS,8

EMCMN00I SRDF-HC : (70) #SQ VOL,RMT(A8DF,29),INV_TRKS,8
EMCQV12I NO DEVICES FOUND WITH INVALID TRACKS (CUU:A8DF)

Stop application processing at the secondary (R2) site.

Reestablish and split Gold Copy BCVs at the R2 Target site

Create a BCV Gold Copy of the secondary (R2) volumes before continuing with the recovery.

At the secondary (R2) site:

◆ Run the jobs to reestablish the BCVs with the R2 volumes in each secondary Symmetrix system.

◆ Run the jobs to split the BCVs from the R2 volumes in each secondary Symmetrix system.

Refresh and RFR-RSUM SRDF pair

The following will finish synchronization of the R1 volumes from the R2 volumes.

At the Primary (R1) site, mark the R1 volumes to be updated from the R2 volumes using the following commands:

#SC VOL,SCFG(GNS1),RNG-PREFRESH,140-147

EMCMN00I SRDF-HC : (64) #SC VOL,SCFG(GNS1),RNG-PREFRESH,140-147
EMCCV1DI PROCESSING RANGE COMMAND FOR DEVICE 0140 FOR 8 DEVICES (CUU:ACDF)


#SC VOL,SCFG(GNS1),RNG-PREFRESH,148-14F

EMCMN00I SRDF-HC : (64) #SC VOL,SCFG(GNS1),RNG-PREFRESH,148-14F
EMCCV1DI PROCESSING RANGE COMMAND FOR DEVICE 0148 FOR 8 DEVICES (CUU:ACDF)

Start the refresh operation using the following commands:

#SC VOL,SCFG(GNS1),RFR-RSUM,140-147

EMCMN00I SRDF-HC : (65) #SC VOL,SCFG(GNS1),RFR-RSUM,140-147
EMCCV1DI PROCESSING RANGE COMMAND FOR DEVICE 0140 FOR 8 DEVICES (CUU:ACDF)

#SC VOL,SCFG(GNS1),RFR-RSUM,148-14F

EMCMN00I SRDF-HC : (65) #SC VOL,SCFG(GNS1),RFR-RSUM,148-14F
EMCCV1DI PROCESSING RANGE COMMAND FOR DEVICE 0148 FOR 8 DEVICES (CUU:ACDF)

At the Primary (R1) site, monitor the process for completion using the invalid-track status command, looking for the number of tracks owed to the R1 volumes. Do not proceed to the next step until the outstanding track count is zero.

#SQ VOL,SCFG(GNS1),INV_TRKS

EMCMN00I SRDF-HC : (63) #SQ VOL,SCFG(GNS1),INV_TRKS
EMCQV00I SRDF-HC DISPLAY FOR (63) #SQ VOL,SCFG(GNS1),INV_TRKS 015
EMCQV12I NO DEVICES FOUND WITH INVALID TRACKS (CUU:ACDF)

EMCQV00I SRDF-HC DISPLAY FOR (63) #SQ VOL,SCFG(GNS1),INV_TRKS 016
EMCQV12I NO DEVICES FOUND WITH INVALID TRACKS (CUU:ACDF)
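Because the procedure must not continue until the tracks owed to the R1 devices drain to zero, the wait loop is easy to automate. The sketch below is hypothetical: `query` is any caller-supplied callable (for example, one that submits the #SQ ... INV_TRKS command through site automation) returning the display text, and completion is keyed off the EMCQV12I message shown above.

```python
import time

def wait_for_zero_invalid_tracks(query, poll_seconds=30, max_polls=120):
    """Poll an invalid-track query until no devices report tracks owed.

    `query` returns the text of a '#SQ VOL,...,INV_TRKS' display;
    completion is signalled by the EMCQV12I message. Returns the number
    of polls taken, or raises TimeoutError if the tracks never drain.
    """
    for attempt in range(1, max_polls + 1):
        if "NO DEVICES FOUND WITH INVALID TRACKS" in query():
            return attempt
        time.sleep(poll_seconds)
    raise TimeoutError("invalid tracks did not drain in time")
```

The poll interval and retry budget are tunable; the defaults here are arbitrary placeholders, not values from the product documentation.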

Reestablish and split Gold Copy BCVs at the R2 Target site

Create a BCV Gold Copy of the secondary (R2) volumes before continuing with the recovery.

At the secondary (R2) site, run the jobs to reestablish the BCVs with the R2 volumes in each secondary Symmetrix system. Then run the jobs to split the BCVs from the R2 volumes in each secondary Symmetrix system.

Stop SRDF and set SYNCH_DIRECTION to R1>R2

Stop SRDF replication and change the synchronization direction from R1<R2 to R1>R2.


At the secondary (R2) site, make the R2 devices NOT READY to the secondary host using the following commands:

#SC VOL,RMT(ACDF,29),RDF-NRDY,148-14F

EMCMN00I SRDF-HC : (636) #SC VOL,RMT(ACDF,29),RDF-NRDY,148-14F

#SC VOL,RMT(ACDF,28),RDF-NRDY,140-147

EMCMN00I SRDF-HC : (637) EMCGM07I COMMAND COMPLETED (CUU:ACDF)

At the primary (R1) site, make the R1 devices READY using the following commands:

#SC VOL,LCL(ACDF,28),RDF-RDY,140-147

EMCMN00I SRDF-HC : (638) #SC VOL,LCL(ACDF,28),RDF-RDY,140-147
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

#SC VOL,LCL(ACDF,29),RDF-RDY,148-14F

EMCMN00I SRDF-HC : (639) #SC VOL,LCL(ACDF,29),RDF-RDY,148-14F
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

At the primary (R1) site, make the R1 devices READ/WRITE enabled using the following commands:

#SC VOL,LCL(ACDF,28),R/W,140-147

EMCMN00I SRDF-HC : (638) #SC VOL,LCL(ACDF,28),R/W,140-147
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

#SC VOL,LCL(ACDF,29),R/W,148-14F

EMCMN00I SRDF-HC : (639) #SC VOL,LCL(ACDF,29),R/W,148-14F
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

At the Primary (R1) site, set the synchronization direction at the R1 host using the following commands:

#SC CNFG,ACDF,SYNCH_DIRECTION,R1>R2

At the secondary (R2) site, set the synchronization direction at the R2 host using the following commands:

#SC CNFG,A8DF,SYNCH_DIRECTION,R1>R2

At the Primary (R1) site, IPL the LPARs.


Begin SRDF/A and MSC processing

Since the R1 and R2 devices have just been synchronized, SRDF/A and MSC should be activated at the Primary site.

At the Primary (R1) site, activate SRDF/A in each session using the following commands:

#SC SRDFA,LCL(ACDF,28),ACT

EMCMN00I SRDF-HC : (89) #SC SRDFA,LCL(ACDF,28),ACT
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

#SC SRDFA,LCL(ACDF,29),ACT

EMCMN00I SRDF-HC : (90) #SC SRDFA,LCL(ACDF,29),ACT
EMCGM07I COMMAND COMPLETED (CUU:ACDF)

At the Primary (R1) site, run SRDF/A queries to ensure secondary Consistency = Y.

#SQ SRDFA,SCFG(GNS1)

EMCMN00I SRDF-HC : (91) #SQ SRDFA,SCFG(GNS1)
EMCQR00I SRDF-HC DISPLAY FOR (91) #SQ SRDFA,SCFG(GNS1) 937
MY SERIAL # MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
28 Y F 28 000190102000 5772-79 G(R1>R2) SRDFA ACTIVE
RDFG1 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO (MSC1 )
----------------------------------------------------------------------
PRIMARY SIDE:
CYCLE NUMBER 7                      MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y )          TOLERANCE ( N )
CAPTURE CYCLE SIZE 2,199            TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 36               AVERAGE CYCLE SIZE 18,589
TIME SINCE LAST CYCLE SWITCH 8      DURATION OF LAST CYCLE 30
MAX THROTTLE TIME 0                 MAX CACHE PERCENTAGE 94
HA WRITES 240,131,947               RPTD HA WRITES 113,683,730
HA DUP. SLOTS 17,117,606            SECONDARY DELAY 38
LAST CYCLE SIZE 11,411              DROP PRIORITY 33
CLEANUP RUNNING ( N )               MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( N )           SRDFA DSE ACTIVE ( Y )
MSC ACTIVE ( N )
----------------------------------------------------------------------
END OF DISPLAY
EMCQR00I SRDF-HC DISPLAY FOR (91) #SQ SRDFA,SCFG(GNS1) 938
MY SERIAL # MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
29 Y F 29 000190102000 5772-79 G(R1>R2) SRDFA ACTIVE
RDFG2 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO (MSC1 )
----------------------------------------------------------------------
PRIMARY SIDE:
CYCLE NUMBER 7                      MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y )          TOLERANCE ( N )
CAPTURE CYCLE SIZE 87               TRANSMIT CYCLE SIZE 9,342
AVERAGE CYCLE TIME 36               AVERAGE CYCLE SIZE 19,751
TIME SINCE LAST CYCLE SWITCH 0      DURATION OF LAST CYCLE 30
MAX THROTTLE TIME 0                 MAX CACHE PERCENTAGE 94
HA WRITES 240,131,947               RPTD HA WRITES 113,683,730
HA DUP. SLOTS 17,117,606            SECONDARY DELAY 30
LAST CYCLE SIZE 10,407              DROP PRIORITY 33
CLEANUP RUNNING ( N )               MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( N )           SRDFA DSE ACTIVE ( Y )
MSC ACTIVE ( N )
END OF DISPLAY

At the Primary (R1) site, with secondary Consistency in all sessions, start MSC:

F SSCF57P,MSC,REFRESH

SCF1390I MSC - REFRESH
SCF1391I MSC - REFRESH COMMAND ACCEPTED.
SCF1557I MSC - GROUP=MSC1 WAITING FOR INITIALIZATION/TERMINATION ENQUE

SCF1321I MSC - TASK DISABLED
SCF1320I MSC - TASK ENABLED

#SC GLOBAL,PARM_REFRESH

EMCMN00I SRDF-HC : (66) #SC GLOBAL,PARM_REFRESH
EMCPS03I REFRESH COMPLETE, STATISTICS FOR ADDED DEVICES FOLLOW
EMCPS00I SSID(S): 31 TOTAL DEV(S): 5,478 SUPPORTED DEV(S): 37


At the Primary (R1) site, run SRDF/A queries to ensure Global Consistency ( Y ).

#SQ SRDFA,SCFG(GNS1)

EMCMN00I SRDF-HC : (95) #SQ SRDFA,SCFG(GNS1)
EMCQR00I SRDF-HC DISPLAY FOR (95) #SQ SRDFA,SCFG(GNS1) 247
MY SERIAL # MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
28 Y F 28 000190102000 5772-79 G(R1>R2) SRDFA A MSC
RDFG1 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO (MSC1 )
---------------------------------------------------------------------
PRIMARY SIDE:
CYCLE NUMBER 59                     MIN CYCLE TIME 3
SECONDARY CONSISTENT ( Y )          TOLERANCE ( N )
CAPTURE CYCLE SIZE 8,705            TRANSMIT CYCLE SIZE
AVERAGE CYCLE TIME 30               AVERAGE CYCLE SIZE 17,51
TIME SINCE LAST CYCLE SWITCH 23     DURATION OF LAST CYCLE 3
MAX THROTTLE TIME 0                 MAX CACHE PERCENTAGE 94
HA WRITES 243,017,585               RPTD HA WRITES 114,756,681
HA DUP. SLOTS 17,313,703            SECONDARY DELAY 54
LAST CYCLE SIZE 11,714              DROP PRIORITY 33
CLEANUP RUNNING ( N )               MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( Y )           SRDFA DSE ACTIVE ( Y )
MSC ACTIVE ( Y )                    ACTIVE SINCE 09/05/2007 16:28:00
CAPTURE TAG C0000000 00000029       TRANSMIT TAG C0000000 00000028
GLOBAL CONSISTENCY ( Y )            STAR RECOVERY AVAILABLE ( N )
----------------------------------------------------------------------
END OF DISPLAY
EMCQR00I SRDF-HC DISPLAY FOR (95) #SQ SRDFA,SCFG(GNS1) 248
MY SERIAL # MY MICROCODE
------------ ------------
000290100810 5772-83
MY GRP ONL PC OS GRP OS SERIAL OS MICROCODE SYNCHDIR FEATURE
LABEL TYPE AUTO-LINKS-RECOVERY LINKS_DOMINO MSC_GROUP
29 Y F 29 000190102000 5772-79 G(R1>R2) SRDFA A MSC
RDFG2 DYNAMIC NO-AUTO-LINKS-RECOVERY LINKS-DOMINO:NO (MSC1 )
----------------------------------------------------------------------
PRIMARY SIDE:
CYCLE NUMBER 58                     MIN CYCLE TIME 30
SECONDARY CONSISTENT ( Y )          TOLERANCE ( N )
CAPTURE CYCLE SIZE 9,287            TRANSMIT CYCLE SIZE 0
AVERAGE CYCLE TIME 30               AVERAGE CYCLE SIZE 18,635
TIME SINCE LAST CYCLE SWITCH 23     DURATION OF LAST CYCLE 31
MAX THROTTLE TIME 0                 MAX CACHE PERCENTAGE 94
HA WRITES 243,017,585               RPTD HA WRITES 114,756,681
HA DUP. SLOTS 17,313,703            SECONDARY DELAY 54
LAST CYCLE SIZE 12,918              DROP PRIORITY 33
CLEANUP RUNNING ( N )               MSC WINDOW IS OPEN ( N )
SRDFA TRANSMIT IDLE ( N )           SRDFA DSE ACTIVE ( Y )
MSC ACTIVE ( Y )                    ACTIVE SINCE 09/05/2007 16:28:00
CAPTURE TAG C0000000 00000029       TRANSMIT TAG C0000000 00000028
GLOBAL CONSISTENCY ( Y )            STAR RECOVERY AVAILABLE ( N )
----------------------------------------------------------------------
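Before declaring the return home complete, the queries above must show consistency at both the session level (SECONDARY CONSISTENT) and the MSC level (GLOBAL CONSISTENCY). A minimal, hypothetical check of the display text; the label spellings and the Y/N-in-parentheses format are taken from the sample displays, so adjust if your Host Component level formats them differently.

```python
import re

def consistency_flags(display_text):
    """Extract the Y/N consistency indicators from a #SQ SRDFA display."""
    def flag(label):
        # Labels are followed by '( Y )' or '( N )' in the sample output.
        m = re.search(label + r"\s*\(\s*([YN])\s*\)", display_text)
        return m.group(1) if m else None
    return {
        "secondary_consistent": flag("SECONDARY CONSISTENT"),
        "msc_active": flag("MSC ACTIVE"),
        "global_consistency": flag("GLOBAL CONSISTENCY"),
    }
```

A wrapper could run this against each session's display and refuse to proceed unless every extracted flag is "Y".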


Glossary

This glossary contains terms and concepts related to disk storage subsystems. Many of these terms and concepts are used in this manual.

A

abend Termination of a task before its completion because of an error condition that cannot be resolved by recovery facilities while the task is executing.

access A specific type of interaction between a subject and an object that results in the flow of information from one to the other.

access authority Authority that relates to a request for a type of access to protected resources. In RACF, the access authorities are NONE, READ, UPDATE, ALTER, and EXECUTE.

access list A list within a profile of all authorized users and their access authorities.

access method A technique for moving data between main storage and input/output devices.

ACID properties The properties of a transaction: atomicity, consistency, isolation, and durability. In CICS, the ACID properties apply to a unit of work (UOW).

actuator A set of access arms and their attached read/write heads, which move as an independent component within a head and disk assembly (HDA).

adapter A card that provides the physical interface between the director and disk devices (SCSI adapter), between the director and parallel channels (Bus & Tag adapter), or between the director and serial channels (Serial adapter).

adapter card See also ”adapter.”

adaptive copy disk mode Symmetrix SRDF Adaptive copy disk mode is similar to adaptive copy write pending mode, except that write tasks accumulate on the primary volume rather than in global memory. A background process destages the write tasks to the corresponding secondary volume. When the skew value is reached, the primary volume reverts to its primary mode of operation, either synchronous or semi-synchronous, whichever is currently specified. See also ”adaptive copy mode,” and “adaptive copy write pending mode.”

adaptive copy mode Symmetrix SRDF Adaptive copy modes facilitate data sharing and migration. These modes allow the primary and secondary volumes to be more than one I/O out of synchronization. The maximum number of I/Os that can be out of synchronization is known as the maximum skew value. The default value is equal to the entire logical volume. The maximum skew value for a volume can be set using the SRDF monitoring and control software.
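The maximum-skew rule described above can be sketched in a few lines. This is a hypothetical illustration, not Symmetrix code; the function name, mode strings, and values are invented:

```python
def replication_mode(tracks_out_of_sync, max_skew, primary_mode="synchronous"):
    """Return the effective mode for a volume under the adaptive copy rule.

    While the out-of-sync backlog is below max_skew the volume runs in
    adaptive copy mode; at or above max_skew it reverts to its configured
    primary mode until the backlog drains back below the skew value.
    """
    if tracks_out_of_sync >= max_skew:
        return primary_mode
    return "adaptive copy"

print(replication_mode(500, 1000))   # adaptive copy: backlog below skew value
print(replication_mode(1000, 1000))  # synchronous: skew value reached
```

The point of the threshold is that adaptive copy trades synchronization lag for throughput only up to a bounded backlog; past that bound, the volume falls back to its stricter primary mode.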

adaptive copy write pending mode With Symmetrix SRDF adaptive copy write pending mode, write tasks accumulate in global memory. A background process moves, or destages, the write-pending tasks to the primary volume and its corresponding secondary volume on the other side of the SRDF link. When the maximum skew value is reached, the primary volume reverts to its primary mode of operation, either synchronous or semi-synchronous, whichever is currently specified. The device remains in the primary mode until the number of tracks to remotely copy becomes less than the maximum skew value. This mode is not supported for FICON or Enginuity 5772 and higher.

address The unique code assigned to each device, workstation or system connected to a network.

address space The area of virtual storage available for a particular job. In z/OS, an address space can range up to 16 exabytes of contiguous virtual storage addresses that the system creates for the user. An address space contains user data and programs, as well as system data and programs, some of which are common to all address spaces.

addressing mode (AMODE) The mode, 24-bit, 31-bit, or 64-bit, in which a program holds and processes addresses on z/OS.

administrator A person responsible for administrative tasks such as access authorization and content management. Administrators can also grant levels of authority to users.

agent A software entity that runs on endpoints and provides management capability for other hardware or software. An example is an SNMP agent. An agent has the ability to spawn other processes.

AL See also ”arbitrated loop.”

allocate To assign a resource for use in performing a specific task.

allocated storage The space that is allocated to volumes, but not assigned.

allocation The entire process of obtaining a volume and unit of external storage, and setting aside space on that storage for a dataset.

alphanumeric character A letter or a number.

alternate track A track designated to contain data in place of a defective primary track. See also ”primary track.”

amode Addressing mode. A program attribute that can be specified (or defaulted) for each CSECT, load module, and load module alias. AMODE states the addressing mode that is expected to be in effect when the program is entered.

application (1) A particular use to which an information processing system is put—for example, a stock control application, an airline reservation application, and an order entry application. (2) A program or set of programs that performs a task; some examples are payroll, inventory management, and word processing applications.

arbitrated loop A Fibre Channel interconnection technology that allows up to 126 participating node ports and one participating fabric port to communicate. See also ”Fibre Channel arbitrated loop,” and “loop topology.”

array An arrangement of related disk drive modules that have been assigned to a group.

ASCII (American Standard Code for Information Interchange) The standard code, using a coded character set consisting of 7-bit coded characters (8-bit including parity check), that is used for information interchange among data processing systems, data communication systems, and associated equipment. The ASCII set consists of control characters and graphic characters.

assembler A computer program that converts assembler language instructions into object code.

assembler language A symbolic programming language that comprises instructions for basic computer operations which are structured according to the data formats, storage structures, and registers of the computer.

asynchronous processing A series of operations that are done separately from the job in which they were requested; for example, submitting a batch job from an interactive job at a work station.

audit To review and examine the activities of a data processing system mainly to test the adequacy and effectiveness of procedures for data security and data accuracy.

authority The right to access objects, resources, or functions.

authorization checking The action of determining whether a user is permitted access to a RACF-protected resource.

authorized program facility (APF) A facility that permits identification of programs authorized to use restricted functions.

automated operations Automated procedures to replace or simplify actions of operators in both systems and network operations.

auxiliary storage All addressable storage other than processor storage. See also ”virtual storage.”

B

backup (1) A copy of computer data that is used to re-create data that has been lost, mislaid, corrupted, or erased. (2) The act or process of creating such a copy to ensure against accidental loss.

bandwidth A measure of the data transfer rate of a transmission channel.

batch A group of records or data processing jobs brought together for processing or transmission.

batch job A predefined group of processing actions submitted to the system to be performed with little or no interaction between the user and the system. See also ”virtual storage.”

batch message processing (BMP) program An IMS batch processing program that has access to online databases and message queues. BMPs run online, but like programs in a batch environment, they are started with job control language (JCL).

batch processing A method of running a program or a series of programs in which one or more records (a batch) are processed with little or no action from the user or operator.

BCP See also ”business continuity planning.”

bidirectional SRDF link If an SRDF group contains both primary and secondary volumes, write operations move data in both directions over the SRDF links for that group. This is called an SRDF bidirectional configuration.

binary data (1) Any data not intended for direct human reading. Binary data may contain unprintable characters, outside the range of text characters. (2) A type of data consisting of numeric values stored in bit patterns of 0s and 1s. Binary data can represent a large number in a smaller amount of storage.

bridge Facilitates communication with LANs, SANs, and networks with dissimilar protocols.

buffer A portion of storage used to hold input or output data temporarily.

business continuity planning An enterprise-wide planning process that creates detailed procedures to be used in the case of a disaster. Business continuity plans take into account processes, people, facilities, systems, and external elements.

C

cache Random access electronic storage used to retain frequently used data for faster access by the channel.

cache slot Unit of cache equivalent to one track.

cache structure A Coupling Facility structure that enables high-performance sharing of cached data by multisystem applications in a sysplex. Applications can use a cache structure to implement several different types of caching systems, including a store-through or a store-in cache.

carriage control character An optional character in an input data record that specifies a write, space, or skip operation.

case-sensitive Pertaining to the ability to distinguish between uppercase and lowercase letters.

catalog (1) A directory of files and libraries, with reference to their locations. (2) To enter information about a file or a library into a catalog. (3) The collection of all dataset indexes that are used by the control program to locate a volume containing a specific dataset.

CEMT The CICS-supplied transaction that allows checking of the status of terminals, connections, and other CICS entities from a console or from CICS terminal sessions.

central processor (CP) The part of the computer that contains the sequencing and processing facilities for instruction execution, initial program load, and other machine operations.

central processor complex (CPC) A physical collection of hardware that includes main storage, one or more central processors, timers, and channels.

channel (1) A path along which signals can be sent; for example, data channel and output channel. (2) A functional unit, controlled by the processor, that handles the transfer of data between processor storage and local peripheral equipment.

channel connection address (CCA) The input/output (I/O) address that uniquely identifies an I/O device to the channel during an I/O operation.

channel director The component in the Symmetrix system that interfaces between the host channels and data storage. It transfers data between the channel and cache.

channel interface The circuitry in a storage control unit that attaches storage paths to a host channel.

channel-to-channel (CTC) The communication (transfer of data) between programs on opposite sides of a channel-to-channel adapter (CTCA). See also ”channel-to-channel adapter (CTCA).”

channel-to-channel adapter (CTCA) An input/output device that is used by a program in one system to communicate with a program in another system.

checkpoint (1) A place in a routine where a check, or a recording of data for restart purposes, is performed. (2) A point at which information about the status of a job and the system can be recorded so that the job step can be restarted later.

checkpoint write Any write to the checkpoint dataset. A general term for the primary, intermediate, and final writes that update any checkpoint dataset.

CIFS See also ”Common Internet File System (CIFS).”

CIM Common Information Model.

CIM object manager The CIMOM is the core component of the implementation of the CIM specification. The CIMOM manages the CIM schema, instantiation, communication, and operation of the physical Providers that represent the CIM classes stored within the namespace of the local host.

CIMOM See also ”CIM object manager.”

client A function that requests services from a server, and makes them available to the user. A term used in an environment to identify a machine that uses the resources of the network. See also ”client-server.”

client authentication The verification of a client in secure communications where the identity of a server or browser (client) with whom you want to communicate is discovered. A sender's authenticity is demonstrated by the digital certificate issued to the sender.

client-server In TCP/IP, the model of interaction in distributed data processing in which a program at one site sends a request to a program at another site and awaits a response. The requesting program is called a client; the answering program is called a server.

client-serverrelationship

Any process that provides resources to other processes on a network is a server. Any process that employs these resources is a client. A machine can run client and server processes at the same time.

code point A 1-byte code representing one of 256 potential characters.

coexistence Two or more systems at different levels (for example, software, service, or operational levels) that share resources. Coexistence includes the ability of a system to respond in the following ways to a new function that was introduced on another system with which it shares resources: ignore a new function, terminate gracefully, support a new function.

command prefix facility (CPF) A z/OS facility that allows the system programmer to define and control subsystem and other command prefixes for use in a sysplex.

Common Internet File System (CIFS) Provides an open cross-platform mechanism for client systems to request file services from server systems over a network. It is based on the SMB protocol widely used by PCs and workstations running a wide variety of operating systems.

compatible peer Symmetrix Enginuity level 5568 (and later) enables the Symmetrix system to support native IBM Peer-to-Peer Remote Copy (PPRC) commands through a Symmetrix feature called EMC Compatible Peer.

complementary metal-oxide semiconductor (CMOS) A technology that combines the electrical properties of positive and negative voltage requirements to use considerably less power than other types of semiconductors.

concurrent SRDF Supports the ability for a single primary volume to be remotely mirrored to two secondary volumes concurrently. This feature is called Concurrent SRDF and is supported in ESCON, Fibre Channel, and Gigabit Ethernet SRDF configurations. Concurrent SRDF requires that each remote mirror operate in the same primary mode, either both synchronous or both semi-synchronous, but allows either (or both) volumes to be placed into one of the adaptive copy modes. See also ”adaptive copy mode.”

connection In TCP/IP, the path between two protocol applications that provides reliable data stream delivery service. In Internet communications, a connection extends from a TCP application on one system to a TCP application on another system.

consistency group A consistency group is a user-defined group of devices that can span multiple Symmetrix systems and, if needed, provide consistency protection. Consistency means that the devices within the group act in unison to preserve dependent-write consistency of a database that may be distributed across multiple Symmetrix systems or multiple RDF groups within a single Symmetrix.

consistent copy A copy of a data entity (for example, a logical volume) that contains the contents of the entire data entity from a single instant in time.

console A user interface to a server. That part of a computer used for communication between the operator or user and the computer.

console group In z/OS, a group of consoles defined in CNGRPxx, each of whose members can serve as an alternate console in console or hardcopy recovery or as a console to display synchronous messages.

control unit Synonymous with device control unit.

control unit address The high order bits of the storage control address, used to identify the storage control unit to the host system.

controller ID Controller identification number of the director to which the disks are channeled for EREP usage. There is only one controller ID for Symmetrix.

conversation A logical connection between two programs over an LU type 6.2 session that allows them to communicate with each other while processing a transaction.

conversational Pertaining to a program or a system that carries on a dialog with a terminal user, alternately accepting input and then responding to the input quickly enough for the user to maintain a train of thought.

copy group One or more copies of a page of paper. Each copy can have modifications, such as text suppression, page position, forms flash, and overlays.

count-key-data (CKD) Data recording format employing self-defining record formats in which each record is represented by a count area that identifies the record and specifies its format, an optional key area that may be used to identify the data area contents, and a data area that contains the user data for the record. CKD can also refer to a set of channel commands that are accepted by a device that employs the CKD recording format.
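The three areas of a CKD record can be made concrete with a toy sketch. The field names below are illustrative, not the actual on-disk layout; real count areas also carry cylinder/head addressing:

```python
from dataclasses import dataclass

@dataclass
class CKDRecord:
    # Count area: identifies the record and specifies its format,
    # including the lengths of the key and data areas that follow.
    record_number: int
    key_length: int
    data_length: int
    # Optional key area: may identify the data area contents (empty if unused).
    key: bytes
    # Data area: the user data for the record.
    data: bytes

rec = CKDRecord(record_number=1, key_length=7, data_length=4,
                key=b"CUST001", data=b"\x00\x01\x02\x03")
# The count area is self-defining: it tells the channel how long the
# key and data areas are before they are read.
assert rec.key_length == len(rec.key) and rec.data_length == len(rec.data)
```

This self-defining structure is what lets CKD devices support variable-length records, in contrast to fixed-block architectures.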

couple dataset Dataset that is created through the XCF couple dataset format utility and, depending on its designated type, is shared by some or all of the z/OS systems in a sysplex. See also ”sysplex.”

Coupling Facility A special logical partition that provides high-speed caching, list processing, and locking functions in a sysplex.

Coupling Facility channel A high-bandwidth fiber optic channel that provides the high-speed connectivity required for data sharing between a coupling facility and the central processor complexes directly attached to it.

coupling services In a sysplex, the functions of XCF that transfer data and status between members of a group residing on one or more z/OS systems in the sysplex.

cross-system Coupling Facility (XCF) A component of z/OS that provides functions to support cooperation between authorized programs running within a sysplex.

cryptographic key A parameter that determines cryptographic transformations between plaintext and ciphertext.

cryptography The transformation of data to conceal its meaning.

Customer Information Control System (CICS) An IBM-licensed program that enables transactions entered at remote terminals to be processed concurrently by user-written application programs. It includes facilities for building, using, and maintaining databases.

CWDM See also ”WDM.”

D

DASD Direct access storage device, a device that provides nonvolatile storage of computer data and random access to that data.

delayed fast write There is no room in cache for the data presented by the write operation.

destage The asynchronous write of new or updated data from cache to disk device.

device A uniquely addressable part of the Symmetrix system that consists of a set of access arms, the associated disk surfaces, and the electronic circuitry required to locate, read, and write data. See also ”volume.”

device address The hexadecimal value that uniquely defines a physical I/O device on a channel path in an MVS environment. See also ”unit address.”

device number The value that logically identifies a disk device in a string.


dual-initiator A Symmetrix feature that automatically creates a backup data path to the disk devices serviced directly by a disk director, if that disk director or the disk management hardware for those devices fails.

dynamic sparing A Symmetrix feature that automatically transfers data from a failing disk device to an available spare disk device without affecting data availability. This feature supports all non-mirrored devices in the Symmetrix system.

daemon A program that runs unattended to perform a standard service.

DASD See also ”direct access storage device (DASD).”

data availability Access to any and all user data by the application.

data definition name The name of a data definition (DD) statement, which corresponds to a data control block that contains the same name. Abbreviated as ddname.

data definition (DD) statement A job control statement that describes a dataset associated with a particular job step.

data integrity The condition that exists as long as accidental or intentional destruction, alteration, or loss of data does not occur.

data in transit The update data on application system DASD volumes that is being sent to the recovery system for writing to DASD volumes on the recovery system.

dataset The major unit of data storage and retrieval, consisting of a collection of data in one of several prescribed arrangements and described by control information to which the system has access.

dataset label (1) Collection of information that describes the attributes of a dataset and is normally stored on the same volume as the dataset. (2) General term for dataset control blocks and tape dataset labels.

dataset separator pages Those pages of printed output that delimit datasets.

data sharing Ability of concurrent subsystems (such as DB2 or IMS DB) or application programs to directly access and change the same data, while maintaining data integrity.

data stream (1) All information (data and control commands) sent over a data link usually in a single read or write operation. (2) Continuous stream of data elements being transmitted, or intended for transmission, in character or binary-digit form, using a defined format.

DATABASE 2 (DB2) Relational database management system. DB2 Universal Database is the relational database management system that is Web-enabled with Java support.

DB2 data sharing group Collection of one or more concurrent DB2 subsystems that directly access and change the same data while maintaining data integrity.

deallocate To release a resource that is assigned to a specific task.

default A value, attribute, or option that is assumed when no alternative is specified by the user.

delayed fast write The execution of write I/Os in the Symmetrix when the system has to free up some space in cache, by writing from cache to disk, before accepting another write from a host system.

Delta Set Extension (DSE) Beginning with Enginuity version 5772, there is an additional option for managing the buffering of SRDF/A delta set data: SRDF/A Delta Set Extension (DSE). DSE provides a mechanism for augmenting the cache-based delta set buffering mechanism of SRDF/A with a disk-based buffering ability. This extended delta set buffering ability may allow SRDF/A to ride through larger or longer SRDF/A throughput imbalances than would be possible with cache-based delta set buffering alone. See also ”SRDF/Asynchronous (SRDF/A).”
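Conceptually, DSE behaves like a two-tier buffer: delta set writes land in cache up to a limit, and overflow spills to a disk-backed pool instead of forcing the session to drop. The sketch below is an illustrative model of that idea only, not the Enginuity implementation; the class and its fields are invented:

```python
class DeltaSetBuffer:
    """Toy two-tier delta set buffer: cache first, disk spillover second."""

    def __init__(self, cache_slots):
        self.cache_slots = cache_slots
        self.cache = []      # fast, limited cache-based buffering
        self.disk_pool = []  # DSE-style disk-based spillover area

    def buffer_track(self, track):
        if len(self.cache) < self.cache_slots:
            self.cache.append(track)       # normal cache-based buffering
        else:
            self.disk_pool.append(track)   # spill to disk instead of dropping

buf = DeltaSetBuffer(cache_slots=2)
for track in ("t1", "t2", "t3", "t4"):
    buf.buffer_track(track)
# cache holds t1 and t2; t3 and t4 spilled to the disk pool
```

The design trade-off the sketch captures: disk spillover is slower than cache, but it extends how long a throughput imbalance can be absorbed before the session must drop.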

dependent-write consistency Data state where data integrity is guaranteed by dependent-write I/Os embedded in application logic. Database management systems are good examples of applications that utilize the dependent-write consistency strategy. Database management systems must devise protection against abnormal termination in order to successfully recover from one. The most common technique used is to guarantee that a dependent write cannot be issued until a predecessor write has completed. Typically the dependent write is a data or index write while the predecessor write is a write to the log. Because the write to the log must be completed prior to issuing the dependent write, the application thread is synchronous to the log write (that is, it waits for that write to complete prior to continuing). The result of this kind of strategy is a dependent-write consistent database.
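The predecessor/dependent ordering can be sketched in a few lines. This is a hypothetical toy, not DBMS code: the data write is only issued after the log write has returned, so a copy taken at any instant either contains the log record for an update or contains neither write:

```python
log_volume = []   # predecessor writes (log records) land here first
data_volume = {}  # dependent writes (data/index) land here only afterward

def committed_update(key, value):
    # Predecessor write: the log write must complete first...
    log_volume.append(("log", key, value))
    # ...and only then is the dependent data write issued. The thread is
    # synchronous to the log write: it does not continue until it returns.
    data_volume[key] = value

committed_update("account-42", 100)
```

Because the ordering is enforced at every step, a point-in-time image of both volumes is always restartable: any data write it contains is covered by a log record.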

dependent-write I/O An I/O that cannot be issued until a related predecessor I/O has completed. Most applications, and in particular database management systems (DBMS), have embedded dependent-write logic to ensure data integrity in the event of a failure in the host or server processor, software, storage subsystem, or if an environmental power failure occurs. See also ”dependent-write consistency.”

destination node Node that provides application services to an authorized external user.

device address The ESA/390 term for the field of an ESCON device-level frame that selects a specific device on a control unit image. The one or two leftmost digits are the address of the channel to which the device is attached. The two rightmost digits represent the unit address.

device control unit (1) Hardware device that controls the reading, writing, or displaying of data at one or more input/output devices or terminals. (2) Hexadecimal value that uniquely defines a physical I/O device on a channel path in an MVS environment. See also ”unit address.”

device driver A program that enables a computer to communicate with a specific device, for example, a disk drive.

device number (1) ESA/390 term for a four-hexadecimal-character identifier, for example, 13A0 that you associate with a device to facilitate communication between the program and the host operator. (2) The device number that you associate with a subchannel.

Device Support Facilities program (ICKDSF) A program used to initialize DASD (including Symmetrix devices) at installation and provide media maintenance.

device type General name for a kind of device; for example, 3390.

diagnostics System-level tests or firmware designed to inspect, detect, and correct failing components. These tests are comprehensive and self-invoking.

direct access storage device (DASD) Device in which the access time is effectively independent of the location of the data. This term is common in the z/OS environment to designate a disk or z/OS volume.

director Component in the Symmetrix system that allows Symmetrix to transfer data between the host channels and disk devices. See also ”channel director,” and “disk director.”

directory (1) Type of file containing the names and controlling information for other files or other directories. Directories can contain subdirectories that can contain subdirectories of their own. (2) File that contains directory entries. No two directory entries in the same directory can have the same name (POSIX.1). (3) File that points to files and to other directories. (4) Index used by a control program to locate blocks of data that are stored in separate areas of a dataset in direct access storage.

disaster recovery The process of restoring a previous copy of the data and applying logs or other necessary processes to that copy to bring it to a known point of consistency.

disaster restart The process of restarting dependent-write consistent copies of data and applications, using the implicit application of DBMS recovery logs during DBMS initialization to bring the data and application to a transactional point of consistency. If a database is shut down normally, the process of getting to a point of consistency during restart requires minimal work. If the database abnormally terminates, the restart process takes longer depending on the number and size of in-flight transactions at the time of termination. An image of the database is created using the EMC consistency technology while the database is running, and is done without conditioning the database. The database is in a dependent-write consistent data state, which is similar to that created by a local power failure. This is also known as a DBMS restartable image. The restart of this image transforms it to a transactionally consistent data state by completing committed transactions and rolling back uncommitted transactions during the normal database initialization process.

disk director The component in the Symmetrix system that interfaces between cache and the disk devices.

EMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS 373

Glossary

DLL filter A filter that provides one or more of these functions in a dynamic load library: init(), prolog(), process(), epilog(), and term(). See cfilter.h and cfilter.c in the /usr/lpp/Printsrv/samples/ directory for more information.

domino modes Symmetrix SRDF domino modes effectively stop all write operations to both primary and secondary volumes if all mirrors of a primary or secondary device fail, if any remote I/O cannot be delivered to a secondary volume, or if all SRDF links in a link group become unavailable. While such a shutdown temporarily halts production processing, domino modes can prevent data integrity exposure caused by rolling disasters.

dotted decimal notation The syntactical representation for a 32-bit integer that consists of four 8-bit numbers written in base 10 with periods (dots) separating them. It is used to represent IP addresses.
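The definition above can be illustrated with a short sketch; this is not part of the original glossary, and the function name `to_dotted_decimal` is illustrative only.

```python
def to_dotted_decimal(addr: int) -> str:
    """Render a 32-bit integer as four 8-bit fields, base 10, dot-separated."""
    return ".".join(str((addr >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(to_dotted_decimal(0xC0A80001))  # 192.168.0.1
```

Each shift isolates one 8-bit field of the 32-bit value, from the most significant byte to the least.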

double-byte character set (DBCS) A set of characters in which each character is represented by a two-byte code. Languages such as Japanese, Chinese, and Korean, which contain more symbols than can be represented by 256 code points, require double-byte character sets. Because each character requires two bytes, the typing, display, and printing of DBCS characters requires hardware and programs that support DBCS. Contrast with single-byte character set.

drain Allowing a printer to complete its current work before stopping the device.

drive sparing Symmetrix DMX systems have a disk sparing functionality that reserves drives as standby spares. These drives are not user-addressable. Sparing increases data availability without affecting performance. Symmetrix DMX systems support both dynamic and permanent sparing functionalities. See also ”dynamic sparing,” and “permanent member sparing.”

dual copy A high availability function made possible by the nonvolatile storage in cached IBM storage controls. Dual copy maintains two functionally identical copies of designated DASD volumes in the logical storage subsystem and automatically updates both copies every time a write operation is issued to the dual copy logical volume.

dual initiator A Symmetrix feature that automatically creates a backup data path to the disk devices serviced directly by a disk director, if that disk director or the disk management hardware for those devices fails.


dual-directional SRDF link For ESCON-based extended-distance SRDF configurations (for example, using E1, E3, T1, T3, and ATM links and/or IP network connections) that require data to move in two directions, a dual-directional link configuration may be required. For more information, consult the current EMC Symmetrix Remote Data Facility (SRDF) Connectivity Guide. With a dual-directional configuration, multiple SRDF groups are used; some groups send data in one direction, while other groups send data in the opposite direction.

duplex pair A volume comprised of two physical devices within the same or different storage subsystems.

DWDM See also ”WDM.”

dynamic path reconnect (DPR) A function that allows disconnected I/O operations with Symmetrix to reconnect over any available channel path rather than be limited to the one on which the I/O operation was started. This function is available only on System 370/XA, System 370/ESA, and System 390/ESA systems.

dynamic sparing Dynamic sparing copies the contents of a failing disk to an available spare without any interruption in I/O processing. Dynamic sparing provides incremental protection against failure of a second disk during the time a disk is taken offline and when it is ultimately replaced and resynchronized. Dynamic sparing is used in combination with RAID 1, RAID 5, and unprotected volumes. See also ”drive sparing,” ”permanent member sparing,” and “spare pool.”

dynamic SRDF devices Dynamic SRDF functionality, introduced at Enginuity level 5568, enables the user to create, delete, and swap SRDF pairs, using EMC host-based SRDF control software, while the Symmetrix system is in operation. Dynamic SRDF allows the user to create SRDF device pairs from non-SRDF devices, and then synchronize and manage them in the same way as static SRDF pairs.

dynamic SRDF group At Enginuity level 5669 or above, a user can dynamically create empty SRDF groups and dynamically associate the groups with Fibre Channel or GigE SRDF directors. Removing dynamic SRDF groups is also possible. Both of these operations are accomplished using EMC host-based SRDF control software. Dynamic SRDF groups created through this method are persistent through Symmetrix power on or IMPL.


E

ESCON Enterprise Systems Connection, a set of IBM and vendor products that connect mainframe computers with each other and with attached storage, locally attached workstations, and other devices using optical fiber technology and dynamically modifiable switches called ESCON Directors. See also ”ESCON director.”

ESCON director Device that provides a dynamic switching function and extended link path lengths (with XDF capability) when attaching an ESCON channel to a Symmetrix serial channel interface.

E_Port An E_Port is an inter-switch expansion port that connects to the E_Port of another Fibre Channel switch, in order to build a larger switched fabric.

enterprise network A geographically dispersed network under the auspices of one organization.

Enterprise Systems Connection (ESCON) A set of products and services that provides a dynamically connected environment using optical cables as a transmission medium.

entry area In z/OS, the part of a console screen where operators can enter commands or command responses.

EREP program The program that formats and prepares reports from the data contained in the Error Recording Data Set (ERDS).

ESCON protocol Enterprise Systems Connection Architecture. A zSeries 900 and S/390 computer peripheral interface. The I/O interface utilizes S/390 logical protocols over a serial interface that configures attached units to a communication fabric.

ETR External Time Reference. See also ”sysplex-timer.”


event Any significant change in the state of a system resource, network resource, or network application. An event can be generated for a problem, for the resolution of a problem, or for the successful completion of a task. Examples of events are: the normal starting and stopping of a process, the abnormal termination of a process, and the malfunctioning of a server.

extended MCS console In z/OS, a console other than an MCS console from which operators or programs can issue system commands and receive messages. An extended MCS console is defined through an OPERPARM segment.

extended remote copy (XRC) A hardware- and software-based remote copy service option that provides an asynchronous volume copy across storage subsystems for disaster recovery, device migration, and workload migration.

F

fast write In Symmetrix, a write operation at cache speed that does not require immediate transfer of data to disk. The data is written directly to cache and is available for later destaging.

FBA Fixed Block Architecture, disk device data storage format using fixed-size data blocks.

FRU Field Replaceable Unit, a component that is replaced or added by service personnel as a single entity.

frame Data packet format in an ESCON environment. See also ”ESCON.”

F_Port A fabric port that is not loop-capable. It is used to connect an N_Port to a switch.

fabric Fibre Channel employs a fabric to connect devices. A fabric can be as simple as a single cable connecting two devices. The term is often used to describe a more complex network utilizing hubs, switches, and gateways.

FarPoint FarPoint is an SRDF feature used only with ESCON extended distance solutions (and certain ESCON campus solutions) to optimize the performance of the SRDF links. This feature works by allowing each SRDF director to transmit multiple I/Os, in series, over each SRDF link.


FC See also ”Fibre Channel.”

FCIP See also ”Fibre Channel over IP.”

FCP See also ”Fibre Channel protocol.”

FCS See also ”Fibre Channel standard.”

fiber optic The medium and the technology associated with the transmission of information along a glass or plastic wire or fiber.

Fibre Channel A technology for transmitting data between computer devices at a data rate of up to 4 Gb/s. It is especially suited for connecting computer servers to shared storage devices and for interconnecting storage controllers and drives.

Fibre Channel arbitrated loop A reference to the FC-AL standard, a shared gigabit media for up to 127 nodes, one of which can be attached to a switch fabric. See ”arbitrated loop.” See also ”loop topology.”

Fibre Channel over IP Fibre Channel over IP is defined as a tunneling protocol for connecting geographically distributed Fibre Channel SANs transparently over IP networks.

Fibre Channel protocol The serial SCSI command protocol used on Fibre Channel networks.

Fibre Channel standard An ANSI standard for a computer peripheral interface. The I/O interface defines a protocol for communication over a serial interface that configures attached units to a communication fabric. Refer to ANSI X3.230-199x.

FICON An I/O interface based on the Fibre Channel architecture. In this new interface, the ESCON protocols have been mapped to the FC-4 layer, that is, the Upper Level Protocol layer, of the Fibre Channel Protocol. It is used in the S/390 and z/Series environments.

field replaceable unit (FRU) A component that is replaced or added by service personnel as a single entity.


file system An individual file system on a host. This is the smallest unit that can be monitored and extended. Policy values defined at this level override those that might be defined at higher levels.

fixed utility volume A simplex volume assigned by the storage administrator to a logical storage subsystem to serve as working storage for XRC functions on that storage subsystem.

FL_Port A fabric port that is loop capable. It is used to connect NL_Ports to the switch in a loop configuration.

FlashCopy Local copy option that provides an online point-in-time copy of data.

floating utility volume Any volume of a pool of simplex volumes assigned by the storage administrator to a logical storage subsystem to serve as dynamic storage for XRC functions on that storage subsystem.

frame (1) A data packet format in an ESCON environment. (2) For a System/390 microprocessor cluster, a frame contains one or two central processor complexes (CPCs), support elements, and AC power distribution.

G

gatekeeper A small logical volume on a Symmetrix storage subsystem used to pass commands from a host to the Symmetrix storage subsystem. Gatekeeper devices are configured on standard Symmetrix disks.

GB Gigabyte, 10⁹ bytes.

gateway In the SAN environment, a gateway connects two or more different remote SANs with each other. A gateway can also be a server on which a gateway component runs.

gateway node A node that is an interface between networks.

generalized trace facility (GTF) Like system trace, GTF gathers information used to determine and diagnose problems that occur during system operation. Unlike system trace, however, GTF can be tailored to record very specific system and user program events.


global access checking The ability to allow an installation to establish an in-storage table of default values for authorization levels for selected resources.

global resource serialization complex One or more z/OS systems that use global resource serialization to serialize access to shared resources (such as datasets on shared DASD volumes).

global mirror A hardware-based remote copy option that provides asynchronous volume copy across storage subsystems for disaster recovery, device migration, and workload migration.

global resource serialization A function that provides a z/OS serialization mechanism for resources (typically datasets) across multiple z/OS images.

group A collection of RACF users who can share access authorities for protected resources.

H

head and disk assembly A field replaceable unit in the Symmetrix system containing the disk and actuator.

hardcopy log A permanent record of system activity in systems with multiple console support or a graphic console.

hardware Physical equipment, as opposed to the computer program or method of use; for example, mechanical, magnetic, electrical, or electronic devices. See also ”software.”

hardware configuration dialog In z/OS, a panel program that is part of the hardware configuration definition. The program allows an installation to define devices for z/OS system configurations.


Hardware Management Console (HMC) A console used to monitor and control hardware such as the System/390 microprocessors.

hardware zoning The members of a hardware zone are based on the physical ports on the fabric switch. Zoning can be implemented in the following configurations: one to one, one to many, and many to many.

HBA See also ”host bus adapter.”

highly parallel Refers to multiple systems operating in parallel, each of which can have multiple processors. See also ”n-way.”

home address (HA) The first field on a CKD track that identifies the track and defines its operational status. The home address is written after the index point on each track.

host Any system that has at least one Internet address associated with it. A host with multiple network interfaces can have multiple Internet addresses associated with it. This is also referred to as a server.

host bus adapter A Fibre Channel HBA connection that allows a workstation to attach to the SAN network.

host not ready In this state, the volume responds not ready to the host for all read and write operations to that volume.

hub A Fibre Channel device that connects up to 126 nodes into a logical loop. All connected nodes share the bandwidth of this one logical loop. Hubs automatically recognize an active node and insert the node into the loop. A node that fails or is powered off is automatically removed from the loop.

hypervolume A user-defined storage device allocated within a Symmetrix physical disk.

hypervolume extension The ability to define more than one logical volume on a single physical disk device, thus making use of its full formatted capacity. These logical volumes are user-selectable in size. The minimum volume size is one cylinder and the maximum size depends on the disk device capacity and the emulation mode selected.


I

ID Identifier, a sequence of bits or characters that identifies a program, device, controller, or system.

IML Initial microcode program loading.

index marker Indicates the physical beginning and end of a track.

index point The reference point on a disk surface that determines the start of a track.

INLINES An EMC-provided host-based Cache Reporter utility for viewing short and long term cache statistics at the system console.

I/O device An addressable input/output unit, such as a disk device.

I/O group A group containing two SVC nodes defined by the configuration process. The nodes in the I/O group provide access to the vDisks in the I/O group. See also ”SVC.”

ICAT IBM Common Information Model [CIM] Agent Technology.

ICKDSF See also ”Device Support Facilities Program (ICKDSF).”

identifier (ID) A sequence of bits or characters that identifies a program, device, controller, or system.

IFCP See also ”Internet Fibre Channel Protocol (IFCP).”

image A single occurrence of the z/OS operating system that has the ability to process work.

IMS DB data sharing group A collection of one or more concurrent IMS DB subsystems that directly access and change the same data while maintaining data integrity.


initial program load (IPL) The initialization procedure that causes an operating system to begin operation.

instruction line In z/OS, the part of the console screen that contains messages about console control and input errors.

interactive job Non-batch job requiring input from the user as necessary to continue processing. An example of an interactive job is the Time Sharing Option (TSO), which provides interactive communications with the z/OS operating system. TSO allows a user or programmer to launch an application from a terminal and interactively work with it.

internal reader A facility that transfers jobs to the job entry subsystem (JES2 or JES3).

Internet Fibre Channel Protocol (IFCP) The Internet Fibre Channel Protocol specification defines IFCP as a gateway-to-gateway protocol for the implementation of a Fibre Channel fabric in which TCP/IP switching and routing elements replace Fibre Channel components.

internet protocol A protocol used to route data from its source to its destination in an Internet environment.

internet SCSI Internet SCSI encapsulates SCSI commands into TCP packets, thereby enabling the transport of I/O block data over IP networks.

IP See also ”internet protocol.”

iSCSI See also ”internet SCSI.”

J

Java A programming language that enables application developers to create object-oriented programs that are very secure, portable across different machine and operating system platforms, and dynamic enough to allow expandability.

Java runtime environment The underlying, invisible system on a computer that runs the applets that the internet browser passes to it.

Java Virtual Machine The execution environment within which Java programs run. The Java Virtual Machine is described by the Java Virtual Machine Specification, which is published by Sun Microsystems. Because Tivoli Kernel Services is based on Java, nearly all ORB and component functions execute in a Java Virtual Machine.


JBOD Just a Bunch Of Disks. A disk group configured without the disk redundancy of the RAID arrangement. When configured as JBOD, each disk in the disk group is a rank in itself.

JES common coupling services A set of macro-driven services that provide the communication interface between JES members of a sysplex. Synonymous with JES XCF.

JES XCF JES cross-system coupling services. The z/OS component, common to both JES2 and JES3, that provides the cross-system coupling services to either JES2 multi-access spool members or JES3 complex members, respectively.

JES2 A z/OS subsystem that receives jobs into the system, converts them to internal format, selects them for execution, processes their output, and purges them from the system. In an installation with more than one processor, each JES2 processor independently controls its job input, scheduling and output processing.

JES2 multi-access spool configuration A multiple z/OS system environment that consists of two or more JES2 processors sharing the same job queue and spool.

JES3 A z/OS subsystem that receives jobs into the system, converts them to internal format, selects them for execution, processes their output, and purges them from the system. In complexes that have several loosely-coupled processing units, the JES3 program manages processors so that the global processor exercises centralized control over the local processors and distributes jobs to them using a common job queue.

JES3 complex A multiple z/OS system environment that allows JES3 subsystem consoles and MCS consoles with a logical association to JES3 to receive messages and send commands across systems.

job entry subsystem (JES) A system facility for spooling, job queuing, and managing the scheduler work area.

job separator pages Those pages of printed output that delimit jobs.

journal A checkpoint dataset that contains work to be done. For XRC, the work to be done consists of all changed records from the primary volumes. Changed records are collected and formed into a “consistency group,” and then the group of updates is applied to the secondary volumes.
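As a loose illustration of the grouping-and-apply idea described in this entry (this is not EMC or XRC code; all names and data structures are hypothetical), changed records can be collected up to a consistency point and then applied to the secondary in timestamp order:

```python
def form_consistency_group(journal, cutoff):
    """Select all journaled updates with timestamps up to the cutoff point."""
    group = [rec for rec in journal if rec["ts"] <= cutoff]
    # Apply order must follow the original write order (timestamp order).
    return sorted(group, key=lambda rec: rec["ts"])

def apply_group(secondary, group):
    """Apply a consistency group to the secondary volume as one unit."""
    for rec in group:
        secondary[rec["track"]] = rec["data"]

journal = [
    {"ts": 1, "track": "A", "data": "x"},
    {"ts": 3, "track": "B", "data": "y"},
    {"ts": 2, "track": "A", "data": "z"},
]
secondary = {}
apply_group(secondary, form_consistency_group(journal, cutoff=2))
print(secondary)  # {'A': 'z'}
```

Because only whole groups are applied, the secondary always reflects a consistent point in time rather than an arbitrary mix of in-flight updates.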


JRE See also ”Java runtime environment.”

JVM See also ”Java Virtual Machine.”

K

K Kilobyte, 1024 bytes.

keyword A part of a command operand or SYS1.PARMLIB statement that consists of a specific character string (such as NAME= on the CONSOLE statement of CONSOLxx).

L

least recently used algorithm (LRU) The algorithm used to identify and make available cache space by removing the least recently used data.
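The LRU policy can be sketched in a few lines; this is an illustration only (the glossary does not define an implementation, and the class name LRUCache is an assumption):

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently used entry once capacity is exceeded."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
```

For example, in a two-entry cache holding "a" and "b", touching "a" and then adding "c" evicts "b", the least recently used entry.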

logical volume A user-defined storage device. In the Model 5200, the user can define a physical disk device as one or two logical volumes.

long miss Requested data is not in cache and is not in the process of being fetched.

longitudinal redundancy code (LRC) Exclusive OR (XOR) of the accumulated bytes in the data record.
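A minimal sketch of this XOR accumulation follows; it is not from the source, and the function name `lrc` is an assumption:

```python
from functools import reduce

def lrc(data: bytes) -> int:
    """Longitudinal redundancy code: XOR of all bytes in the record."""
    return reduce(lambda acc, b: acc ^ b, data, 0)

record = bytes([0x12, 0x34, 0x56])
print(hex(lrc(record)))  # 0x70
```

Appending the LRC byte to the record makes the XOR of the extended record zero, which is how the check detects single-bit corruption.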

Licensed Internal Code (LIC) Microcode that IBM does not sell as part of a machine, but licenses to the customer. LIC is implemented in a part of storage that is not addressable by user programs. Some IBM products use it to implement functions as an alternative to hard-wired circuitry.

link address On an ESCON interface, the portion of a source, or destination address in a frame that ESCON uses to route a frame through an ESCON director. ESCON associates the link address with a specific switch port that is on the ESCON director. Equivalently, it associates the link address with the channel subsystem or controller link-level functions that are attached to the switch port.


list structure A Coupling Facility structure that enables multisystem applications in a sysplex to share information organized as a set of lists or queues. A list structure consists of a set of lists and an optional lock table that can be used for serializing resources in the list structure. Each list consists of a queue of list entries.

local volumes Volumes that reside on a Symmetrix system, but do not participate in SRDF activity.

locality of reference Locality of reference in SRDF/A environments improves the efficiency of the SRDF network links. Even if there are multiple data updates (that is, repeated writes) in the same cycle, the systems send the data across the SRDF links only once.

locally not ready If the local primary SRDF volume fails, the host continues to recognize that volume as available for read/write operations as all reads and/or writes continue uninterrupted with the secondary (target, R2) volume in that remotely mirrored pair.

lock structure A Coupling Facility structure that enables applications in a sysplex to implement customized locking protocols for serialization of application-defined resources. The lock structure supports shared, exclusive, and application-defined lock states, as well as generalized contention management and recovery protocols.

LPAR A subset of the processor hardware that is defined to support an operating system. An LPAR contains resources (processors, memory, and input/output devices) and operates as an independent system. If hardware requirements are met, multiple logical partitions can exist within a system. See also ”logical partitioning.”

logical partitioning A function of an operating system that enables the creation of logical partitions.

logical subsystem The logical functions of a storage controller that allow one or more host I/O interfaces to access a set of devices. The controller aggregates the devices according to the addressing mechanisms of the associated I/O interfaces. One or more logical subsystems can exist on a storage controller. In general, the controller associates a given set of devices with only one logical subsystem.


logical unit (LU) In SNA, a port through which an end user accesses the SNA network in order to communicate with another end user, and through which the end user accesses the functions provided by system services control points (SSCPs).

logical unit number (LUN) A volume identifier that is unique among all storage servers. The LUN is synonymous with a physical disk drive or a SCSI device. For disk subsystems such as the IBM Enterprise Storage Server, a LUN is a logical disk drive (a unit of storage on the SAN that is available for assignment or unassignment to a host server). The LUNs are provided by the storage devices attached to the SAN.

loop topology In a loop topology, the available bandwidth is shared among all the nodes connected to the loop. If a node fails or is not powered on, the loop is out of operation. This can be corrected using a hub. A hub opens the loop when a new node is connected and closes it when a node disconnects. See also ”Fibre Channel arbitrated loop” and ”arbitrated loop.”

loosely coupled A multisystem structure that requires a low degree of interaction and cooperation between multiple z/OS images to process a workload. See also ”tightly coupled.”

LUN See also ”logical unit number (LUN).”

LUN assignment criteria The combination of a set of LUN types, a minimum size, and a maximum size used for selecting a LUN during automatic assignment.

LUN masking This allows or blocks access to the storage devices on the SAN. Intelligent disk subsystems provide this kind of masking.

M

MB Megabyte, 10⁶ bytes.


mirroring The Symmetrix maintains two identical copies of a designated volume on separate disks. Each volume automatically updates during a write operation. If one disk device fails, Symmetrix automatically uses the other disk device.

mirrored pair A logical volume with all data recorded twice, once on each of two different physical devices.

MAN See also ”metropolitan area network.”

managed object See also ”managed resource.”

managed resource A physical element to be managed.

management information base (MIB) A logical database residing in the managed system which defines a set of MIB objects. A MIB is considered a logical database because actual data is not stored in it; rather, it provides a view of the data that can be accessed on a managed system.

master console authority In a system or sysplex, a console defined with AUTH(MASTER) other than the master console from which all z/OS commands can be entered.

master console In a z/OS system or sysplex, the main console used for communication between the operator and the system from which all z/OS commands can be entered. The first active console with AUTH(MASTER) defined becomes the master console in a system or sysplex.

master trace A centralized data tracing facility of the master scheduler, used in servicing the message processing portions of z/OS.

MCS console A non-SNA device defined to z/OS that is locally attached to a z/OS system and is used to enter commands and receive messages.

media The disk surface on which data is stored.

member A specific function (one or more modules/routines) of a multisystem application that is defined to XCF and assigned to a group by the multisystem application. A member resides on one system in the sysplex and can use XCF services to communicate (send and receive data) with other members of the same group.



message processing facility (MPF) A facility used to control message retention, suppression, and presentation.

message queue A queue of messages that are waiting to be processed or waiting to be sent to a terminal.

message text The part of a message consisting of the actual information that is routed to a user at a terminal or to a program.

metro mirror A hardware-based remote copy option that provides a synchronous volume copy across storage subsystems for disaster recovery, device migration, and workload migration.

metropolitan area network (MAN) A network that connects nodes distributed over a metropolitan (city-wide) area, as opposed to a local area (campus) or wide area (national or global).

MIB object A MIB object is a unit of managed information that specifically describes an aspect of a system. Examples are CPU utilization, software name, hardware type, and so on. A collection of related MIB objects is defined as a MIB.

MIB See also ”management information base (MIB).”

microprocessor A processor implemented on one or a small number of chips.

mirrored pair A logical volume with all data recorded twice, once on each of two different physical devices.

mirroring The Symmetrix maintains two identical copies of a designated volume on separate disks. Each volume automatically updates during a write operation. If one disk device fails, Symmetrix automatically uses the other disk device.

mirroring (RAID 1) Provides the highest level of performance and availability for all mission-critical and business-critical applications by maintaining a duplicate copy of a volume on two disk drives.

mixed complex A global resource serialization complex in which one or more of the systems in the global resource serialization complex are not part of a multisystem sysplex.

multiaccess spool (MAS) A complex of multiple processors running z/OS and JES2 that share a common JES2 spool and JES2 checkpoint dataset.



multiple console support (MCS) The operator interface in a z/OS system.

multiple device manager Software designed to allow administrators to manage storage area networks (SANs) and storage from a single console. It is now known as TotalStorage Productivity Center.

multiprocessing The simultaneous execution of two or more computer programs or sequences of instructions. See also ”parallel processing.”

multiprocessor (MP) A CPC that can be physically partitioned to form two operating processor complexes.

multisession consistency (MSC) mode Beginning with Enginuity 5x71 for mainframe and open systems, SRDF/A is supported in configurations where there are multiple primary Symmetrix systems and/or multiple primary Symmetrix SRDF groups connected to multiple secondary Symmetrix systems and/or multiple secondary Symmetrix SRDF groups. This is referred to as SRDF/A Multi-Session Consistency, or SRDF/A MSC. SRDF/A MSC configurations can also support mixed open systems and mainframe data controlled within the same SRDF/A MSC session.

multisystem application An application program that has various functions distributed across z/OS images in a multisystem environment.

multisystem console support Multiple console support for more than one system in a sysplex. Multisystem console support allows consoles on different systems in the sysplex to communicate with each other (send messages and receive commands).

multisystem environment An environment in which two or more z/OS images reside in one or more processors, and programs on one image can communicate with programs on the other images.

multisystem sysplex A sysplex in which two or more z/OS images are allowed to be initialized as part of the sysplex.

N

N_Port A node port. A Fibre Channel-defined hardware entity at the end of a link that provides the mechanisms necessary to transport information units to or from another node.



NAS See also ”network attached storage.”

network attached storage (NAS) A NAS device is attached to a TCP/IP-based network (LAN or WAN) and is accessed using CIFS and NFS, specialized I/O protocols for file access and file sharing.

Network File System (NFS) A component of z/OS that allows remote access to z/OS host processor data from workstations, personal computers, or any other system on a TCP/IP network that is using client software for the Network File System protocol.

network topology A physical arrangement of nodes and interconnecting communication links in networks based on application requirements and geographical distribution of users.

NL_Port A node loop port. A node port that supports arbitrated loop devices.

nonstandard labels Labels that do not conform to American National Standard or IBM System/370 standard label conventions.

nucleus initialization program (NIP) The stage of z/OS that initializes the control program; it allows the operator to request last-minute changes to certain options specified during initialization.

n-way The number (n) of CPs in a CPC. For example, a 6-way CPC contains six CPs.

O

offline Pertaining to equipment or devices not under control of the processor.

offline SRDF link An SRDF link is offline if one or more of the following occurs: the remote adapter is offline (link disabled); or the remote adapter is online but the link itself is down because of a damaged or disconnected cable or other damaged hardware (link disabled).

online Pertaining to equipment or devices under control of the processor.

online SRDF link An SRDF link is online when all of the following are true: the remote adapter is operational and enabled on both sides of the SRDF configuration; the Symmetrix systems are configured properly on both sides of the SRDF configuration; and the external link infrastructure components are operational.



open system A system whose characteristics comply with standards made available throughout the industry, and therefore can be connected to other systems that comply with the same standards.

operating system (OS) Software that controls the execution of programs and that may provide services such as resource allocation, scheduling, input/output control, and data management. Although operating systems are predominantly software, partial hardware implementations are possible.

operations log In z/OS, the operations log is a central record of communications and system problems for each system in a sysplex.

orphan data Data written between the last safe backup for a recovery system and the time when the application system experiences a disaster. This data is lost either when the application system again becomes available for use or when the recovery system is used in place of the application system.

P

promotion The process of moving data from a track on the disk device to a cache slot.

parallel processing The simultaneous processing of units of work by many servers. The units of work can be either transactions or subdivisions of large units of work (batch). See also ”highly parallel.”

Parallel Sysplex A sysplex that uses one or more coupling facilities.

partitionable CPC A CPC that can be divided into two independent CPCs. See also ”physical partition,” ”single-image (SI) mode,” and ”multiprocessor (MP).”



partitioned dataset (PDS) A dataset on direct access storage that is divided into partitions, called members, each of which can contain a program, part of a program, or data.

partitioned dataset (PDS) assist An IBM feature for 3990 Model 6 and 3990 Model 3 with Extended Platform units. PDS Assist improves performance on large, heavily used partitioned datasets by modifying the directory search process.

partitioned dataset extended (PDSE) A system-managed dataset that contains an indexed directory and members that are similar to the directory and members of partitioned datasets. A PDSE can be used instead of a partitioned dataset.

password A unique string of characters known to a computer system and to a user, who must specify the character string to gain access to a system and to the information stored within it.

peer-to-peer remote copy (PPRC) A hardware-based remote copy option that provides a synchronous volume copy across storage subsystems for disaster recovery, device migration, and workload migration.

permanent dataset A user-named dataset that is normally retained for longer than the duration of a job or interactive session. Contrast with temporary dataset.

permanent link loss If SRDF/A experiences a permanent link loss, it drops all of the devices on the link to the not-ready state. As a result, all data in the active and inactive primary Symmetrix cycles (the capture and transmit delta sets) is changed from write pending for the remote mirror to owed to the remote mirror. In addition, any new write I/Os on the primary Symmetrix system result in tracks being marked owed to the remote mirror. All of these tracks are owed to the secondary Symmetrix system once the links are restored.
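The track-state transition this entry describes can be sketched as follows (a minimal illustrative model; the state names and functions are mine, not Enginuity internals):

```python
# Illustrative model of the state change on a permanent link loss: tracks
# that were "write pending" to the remote mirror (capture and transmit
# delta sets) become "owed" to it, and any new write also marks its track
# as owed until the links are restored.
def on_permanent_link_loss(track_states):
    return {track: "owed" if state == "write_pending" else state
            for track, state in track_states.items()}

def on_new_write(track_states, track):
    # With the links down, a new host write marks the track owed as well.
    track_states[track] = "owed"
    return track_states

states = {0x10: "write_pending", 0x11: "synchronized", 0x12: "write_pending"}
states = on_permanent_link_loss(states)
states = on_new_write(states, 0x13)
# Tracks 0x10, 0x12, and 0x13 are now owed to the secondary Symmetrix.
```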

permanent member sparing Permanent member sparing is a process that permanently replaces a failing drive with a spare drive through a configuration change. The spare drive must have the same block size, capacity, and speed, and must be in a location that conforms to the configuration rules for distributing mirrors. Permanent member sparing is used in combination with all protection types. The failed drive becomes a not-ready spare in the spare pool and can be replaced at a later time. If the process cannot identify a spare in a good location, the dynamic sparing process takes place for RAID 1, RAID 5, and unprotected volumes. See also ”drive sparing,” ”dynamic sparing,” and ”spare pool.”



physical ID Physical identification number of the Symmetrix director for EREP usage. This value automatically increments by one for each director installed in Symmetrix. This number must be unique in the mainframe system. It should be an even number. This number is referred to as the SCU_ID.

physical partition Part of a CPC that operates as a CPC in its own right, with its own copy of the operating system.

physically partitioned (PP) configuration A system configuration that allows the processor controller to use both central processor complex (CPC) sides as individual CPCs. The A-side of the processor controller controls side 0; the B-side of the processor controller controls side 1. Contrast with single-image (SI) configuration.

point of consistency A point in time to which data can be restored and recovered or restarted and maintain integrity for all data and applications.

port An endpoint for communication between applications, generally referring to a logical connection. A port provides queues for sending and receiving data. Each port has a port number for identification. When the port number is combined with an Internet address, it is called a socket address.

port zoning In Fibre Channel environments, the grouping together of multiple ports to form a virtual private storage network. Ports that are members of a group or zone can communicate with each other but are isolated from ports in other zones. See also ”LUN masking,” and “subsystem masking.”

PPRC See also ”metro mirror” and ”peer-to-peer remote copy (PPRC).”

primary device One device of a dual copy or remote copy volume pair. All channel commands to the copy logical volume are directed to the primary device. The data on the primary device is duplicated on the secondary device. See also ”secondary device.”

primary SRDF volumes A volume that contains production data that is mirrored in a different Symmetrix system. Primary volumes are also referred to as source or R1 volumes. Updates to a primary volume are automatically mirrored to a secondary volume in the remote Symmetrix system. See also ”secondary SRDF volume.”



primary track The original track on which data is stored. See also ”alternate track.”

Print Services Facility (PSF) The access method that supports the 3800 Printing Subsystem Models 3 and 8. PSF can interface either directly to a user's application program or indirectly through the job entry subsystem (JES) of z/OS.

printer A device that writes output data from a system on paper or other media.

processor controller Hardware that provides support and diagnostic functions for the central processors.

Processor Resource/Systems Manager (PR/SM) The feature that allows a processor to use several z/OS images simultaneously and provides logical partitioning capability. See also ”LPAR.”

profile Data that describes the significant characteristics of a user, a group of users, or one or more computer resources.

program function key (PFK) A key on the keyboard of a display device that passes a signal to a program to call for a particular program operation.

program status word (PSW) A double word in main storage used to control the order in which instructions are executed, and to hold and indicate the status of the computing system in relation to a particular program.

protocol The set of rules governing the operation of functional units of a communication system if communication is to take place. Protocols can determine low-level details of machine-to-machine interfaces, such as the order in which bits from a byte are sent. They can also determine high-level exchanges between application programs, such as file transfer.

R

RAID Redundant array of inexpensive or independent disks. A method of configuring multiple disk drives in a storage subsystem for high availability and high performance.



RAID 0 A protection method where data is striped across several disks to increase performance. Unless combined with RAID 1, it does not natively provide protection from data loss due to drive failure. See also ”RAID 10.”

RAID 1 A protection method that provides the highest level of performance and availability for all mission-critical and business-critical applications by maintaining a duplicate copy of a volume on two disk drives. See also ”mirroring.”

RAID 5 A protection method that provides high performance with automatic striping across hypervolumes. Lost hypervolumes are regenerated from remaining members. RAID 5 is configured in (3+1) and (7+1) groups. RAID 5 technology stripes data and distributes parity blocks across all the disk drives in the RAID group.
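The regeneration property this entry describes can be illustrated with a small XOR sketch (illustrative arithmetic only, not the Enginuity implementation): the parity block of a stripe is the XOR of its data blocks, so any single lost member is recoverable as the XOR of the surviving members and the parity.

```python
from functools import reduce

def parity(blocks):
    """XOR corresponding bytes of all blocks in a stripe."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

def regenerate(surviving, parity_block):
    """Rebuild one lost member from the surviving members plus parity."""
    return parity(surviving + [parity_block])

# A (3+1) group: three data members and one parity member per stripe.
data = [b"\x01\x02", b"\x10\x20", b"\x0f\x0f"]
p = parity(data)
# Lose the second member; regenerate it from the other two and the parity.
rebuilt = regenerate([data[0], data[2]], p)
assert rebuilt == data[1]
```

In real RAID 5 the parity blocks are rotated across all drives in the group, as the entry notes; the sketch shows only the regeneration arithmetic.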

RAID 6 A protection method that supports the ability to rebuild data in the event that two drives within the RAID group fail.

RAID 10 A protection method that combines RAID 1 and RAID 0; used in mainframe environments.

read access Permission to read information.

read hit Data requested by the read operation is in cache.

read miss Data requested by the read operation is not in cache.

read/write volume A state indicating the volume is available for read/write operations.

ready volume A state indicating that the volume is available for read/write operations.

record zero The first record after the home address.

recording format For a tape volume, the format of the data on the tape, for example, 18, 36, 128, or 256 tracks.

recovery The process of rebuilding data after it has been damaged or destroyed, often by using a backup copy of the data or by reapplying transactions recorded in a log.



recovery system A system used in place of a primary application system that is no longer available for use. Data from the application system must be available for use on the recovery system. This is usually accomplished through backup and recovery techniques, or through various DASD copying techniques, such as remote copy.

remote operations Operation of remote sites from a host system.

remote volume mirroring See also ”SRDF/Asynchronous (SRDF/A)” and ”SRDF/Synchronous (SRDF/S).”

remotely not ready If primary SRDF volumes are remotely not ready, write updates do not propagate to the secondary volumes. Changes to the primary volumes are marked as invalid (owed) to the secondary volumes.

reserve capacity enhancement SRDF/A Reserve Capacity enhances SRDF/A's ability to maintain an operational state when encountering network resource constraints that would previously have suspended SRDF/A operations. With SRDF/A Reserve Capacity functions enabled, additional resource allocation can be applied to address temporary workload peaks, periods of network congestion, or even transient network outages. See also ”transmit idle” and ”Delta Set Extension (DSE).”

Resource Access Control Facility (RACF) A security manager for z/OS that provides access control by identifying and verifying users to the system, authorizing access to protected resources, logging detected unauthorized attempts to enter the system, and logging detected accesses to protected resources.

restore A process that reinstates a prior copy of the data.

restructured extended executor (REXX) A general-purpose, procedural language for end-user personal programming, designed for ease of use by both casual general users and computer professionals. It is also useful for application macros. REXX includes the capability of issuing commands to the underlying operating system from these macros and procedures. Features include powerful character-string manipulation, automatic data typing, manipulation of objects familiar to people (such as words, numbers, and names), and built-in interactive debugging.

resynchronization A track image copy from the primary volume to the secondary volume of only those tracks that have changed since the volume was last in duplex mode.
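The incremental nature of resynchronization can be sketched like this (a hypothetical model; the track tables and changed-track set are assumptions for illustration, not SRDF internals):

```python
def resynchronize(primary, secondary, changed_tracks):
    """Copy only the tracks flagged as changed since the pair was last in
    duplex mode, rather than recopying the whole volume."""
    for track in sorted(changed_tracks):
        secondary[track] = primary[track]
    return len(changed_tracks)

# Track-image tables for a tiny volume; tracks 1 and 3 changed on the
# primary while the pair was split.
primary = {0: "a", 1: "b2", 2: "c", 3: "d2"}
secondary = {0: "a", 1: "b", 2: "c", 3: "d"}
copied = resynchronize(primary, secondary, changed_tracks={1, 3})
assert copied == 2 and secondary == primary
```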



rolling disaster A rolling disaster is a series of events that lead up to a complete disaster. For example, the loss of a communication link occurs prior to a site failure. Most disasters are rolling disasters; their duration may be only milliseconds or up to hours.

routing The assignment of the communications path by which a message will reach its destination.

routing code A code assigned to an operator message and used to route the message to the proper console.

RVM See also ”remote volume mirroring.”

S

SCSI adapter Card in the Symmetrix system that provides the physical interface between the disk director and the disk devices.


string A series of connected disk devices sharing the same disk director.

SAN agent A software program that communicates with the manager and controls the subagents. This component is largely platform independent. See also ”subagent.”

SAN file system Allows computers attached using a SAN to share data. It typically separates the actual file data from the metadata, using the LAN path to serve the metadata, and the SAN path for the file data.



SAN integration server (SIS) A prepackaged system comprising an SVC, backend storage, SAN and Ethernet switches, and a master controller, assembled and preconfigured in a rack.

SAN volume controller (SVC) A SAN appliance designed for attachment to a variety of host computer systems, which carries out block-level virtualization of disk storage.

SAN See also ”storage area network.”

scrubbing The process of reading, checking the error correction bits, and writing corrected data back to the source.

SCU_ID For 3880 storage control emulations, this value uniquely identifies the storage director without respect to its selection address. It identifies to the host system, through the EREP, the director detecting the failing subsystem component. This value automatically increments by one for each director installed. The SCU_ID must be a unique number in the host system. It should be an even number and start on a zero boundary.

secondary device One of the devices in a dual copy or remote copy logical volume pair that contains a duplicate of the data on the primary device. Unlike the primary device, the secondary device may only accept a limited subset of channel commands. See also ”primary device.”

secondary SRDF volume A volume that contains a mirrored copy of data from a primary SRDF volume. Secondary volumes are also referred to as target or R2 volumes. See also ”primary SRDF volumes.”

semi-synchronous mode Used mainly for extended-distance SRDF solutions, semi-synchronous mode allows the primary and secondary volumes to be out of synchronization by one write I/O operation. Data must be successfully stored in the Symmetrix system containing the primary volume before an acknowledgement is sent to the local host. This mode is not supported for FICON or at Enginuity 5772 and higher.

serial storage architecture An IBM standard for a computer peripheral interface. The interface uses a SCSI logical protocol over a serial interface that configures attached targets and initiators in a ring topology.

server A program running on a mainframe, workstation, or file server that provides shared services. This is also referred to as a host.



shared DASD option An option that enables independently operating computing systems to jointly use common data residing on shared direct access storage devices.

shared storage Storage within a storage facility that is configured such that multiple homogeneous or divergent hosts can concurrently access the storage. The storage has a uniform appearance to all hosts. The host programs that access the storage must have a common model for the information on a storage device. Programs must be designed to handle the effects of concurrent access.

short miss Requested data is not in cache, but is in the process of being fetched.

side Partition of a CPC.

simple network management protocol (SNMP) A protocol designed to give a user the capability to remotely manage a computer network by polling and setting terminal values and monitoring network events.

single point of control The characteristic a sysplex displays when a user can accomplish a given set of tasks from a single workstation, even if multiple IBM and vendor products are needed to accomplish that particular set of tasks.

single system image The characteristic a product displays when multiple images of the product can be viewed and managed as one image.

single-image (SI) mode A mode of operation for a multiprocessor (MP) system that allows it to function as one CPC. By definition, a uniprocessor (UP) operates in single-image mode. Contrast with physically partitioned (PP) configuration.

single-system sysplex A sysplex in which only one z/OS system is allowed to be initialized as part of the sysplex. In a single-system sysplex, XCF provides XCF services on the system but does not provide signaling services between z/OS systems. See also ”multisystem sysplex.”

SIS See also ”SAN integration server (SIS).”

small computer system interface (SCSI) An ANSI standard for a logical interface to computer peripherals and for a computer peripheral interface. The interface utilizes a SCSI logical protocol over an I/O interface that configures attached targets and initiators in a multidrop bus topology.

SNMP See also ”simple network management protocol (SNMP).”



SNMP agent An implementation of a network management application that is resident on a managed system. Each node that is to be monitored or managed by an SNMP manager in a TCP/IP network must have an SNMP agent resident. The agent receives requests to either retrieve or modify management information by referencing MIB objects. MIB objects are referenced by the agent whenever a valid request from an SNMP manager is received. See also ”simple network management protocol (SNMP).”
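The agent's role of answering manager requests by referencing MIB objects can be sketched as a toy model (not a real SNMP implementation; the class and MIB object names are illustrative):

```python
class SnmpAgentSketch:
    """Toy model of an SNMP agent: it serves retrieve and modify requests
    from a manager by referencing MIB objects held on the managed system."""
    def __init__(self, mib):
        self.mib = mib  # MIB object name -> current value on this system

    def get(self, name):
        # A valid request causes the agent to reference the MIB object.
        if name not in self.mib:
            raise KeyError(f"no such MIB object: {name}")
        return self.mib[name]

    def set(self, name, value):
        # Managers may also request modification of management information.
        self.get(name)  # verify the object exists first
        self.mib[name] = value

agent = SnmpAgentSketch({"cpuUtilization": 37, "softwareName": "z/OS"})
assert agent.get("cpuUtilization") == 37
agent.set("cpuUtilization", 42)
assert agent.get("cpuUtilization") == 42
```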

SNMP manager A managing system that executes a managing application or suite of applications. These applications depend on MIB objects for information that resides on the managed system.

SNMP trap A message that is originated by an agent application to alert a managing application of the occurrence of an event.

software zoning Zoning implemented within the Simple Name Server (SNS) running inside the fabric switch. When using software zoning, the members of a zone can be defined by node WWN, port WWN, or physical port number. Usually the zoning software also allows you to create symbolic names for the zone members and for the zones themselves.

software (1) All or part of the programs, procedures, rules, and associated documentation of a data processing system. (2) A set of programs, procedures, and, possibly, associated documentation concerned with the operation of a data processing system. For example, compilers, library routines, manuals, circuit diagrams. See also ”hardware.”

spare drive Symmetrix systems have a disk sparing functionality that reserves drives as standby spares. These drives are not user-addressable. Sparing increases data availability without affecting performance. Symmetrix DMX systems support both dynamic and permanent sparing functionalities. See also ”dynamic sparing,” ”permanent member sparing,” and ”spare pool.”

spare pool Symmetrix systems have a disk sparing functionality that reserves drives as standby spares. The collection of spare drives is called the spare pool. See also ”drive sparing,” ”dynamic sparing,” and ”permanent member sparing.”

SQL Structured Query Language.



SRDF group SRDF groups define relationships between Symmetrix systems. An SRDF group is a set of SRDF director port connections configured to communicate with another set of SRDF director ports in another Symmetrix system. Logical volumes (devices) are assigned to SRDF groups.

SRDF link One end-to-end SRDF connection between a given pair of Symmetrix systems.

SRDF/Asynchronous (SRDF/A) A mode of remote replication that allows customers to asynchronously replicate data while maintaining a dependent-write consistent copy of the data on the secondary (target, R2) device at all times. The dependent-write consistent point-in-time copy of the data at the remote side is typically only seconds behind the primary (source, R1) side. SRDF/A session data is transferred to the secondary Symmetrix system in cycles (or delta sets), eliminating the redundancy of multiple same-track changes being transferred over the link, potentially reducing the required bandwidth.

SRDF/Automated Replication (SRDF/AR) An automation solution that uses both SRDF and TimeFinder to provide a periodic asynchronous replication of a restartable data image. A single-hop SRDF/AR configuration is used to permit controlled data loss (depending on the cycle time). For protection over greater distances, a multihop SRDF/AR configuration can provide long-distance disaster restart with zero data loss at a middle or "bunker" site.

SRDF/Consistency Groups (SRDF/CG) An SRDF product offering designed to ensure the dependent-write consistency of data remotely mirrored by the SRDF operations in the event of a rolling disaster.

SRDF/Data Mobility (SRDF/DM) An SRDF product offering that permits operation in SRDF adaptive copy mode only and is designed for data replication and/or migration between two or more Symmetrix systems. SRDF/DM transfers data from primary volumes to secondary volumes, permitting information to be shared, content to be distributed, and access to be local to additional processing environments. Adaptive copy mode enables applications using that volume to avoid propagation delays while data is transferred to the remote site. SRDF/DM supports all Symmetrix systems and all Enginuity levels that support SRDF, and can be used for local or remote transfers. See also ”adaptive copy mode.”

EMC SRDF/A and SRDF/A Multi-Session Consistency on z/OS


Glossary

SRDF/Star An SRDF product offering that provides advanced multisite business continuity protection for mainframe and open systems environments. It enables concurrent SRDF/S with consistency groups and SRDF/A with MSC operations from the primary source volumes with the ability to incrementally establish an SRDF/A session between the two remote sites in the event of a primary site outage—a capability only available through SRDF/Star software.

SRDF/Synchronous (SRDF/S)

A business continuance solution that maintains a real-time (synchronous) copy of data at the logical volume level.

SSID The subsystem identifier for 3990 storage control emulations; this value identifies the physical components of a logical DASD subsystem. The SSID must be a unique number in the host system. It should be an even number and start on a zero boundary.

stage The process of writing data from a disk device to cache.

status-display console An MCS console that can receive displays of system status but from which an operator cannot enter commands.

storage administrator A person in the data processing center who is responsible for defining, implementing, and maintaining storage management policies.

storage area network A managed, high-speed network that enables any-to-any interconnection of heterogeneous servers and storage systems.

storage class A collection of storage attributes that identify performance goals and availability requirements, defined by the storage administrator, used to select a device that can meet those goals and requirements.

storage control unit The component in the Symmetrix system that connects Symmetrix to the host channels. It performs channel commands and communicates with the disk directors and cache. See also ”channel director.”

storage group A collection of storage volumes and attributes, defined by the storage administrator. The collection can be a group of DASD volumes or tape volumes, or a group of DASD, optical, or tape volumes treated as a single object storage hierarchy.


storage management The activities of dataset allocation, placement, monitoring, migration, backup, recall, recovery, and deletion. These can be done either manually or by using automated processes. The Storage Management Subsystem automates these processes while optimizing storage resources. See also ”Storage Management Subsystem (SMS).”

Storage Management Subsystem (SMS)

A facility used to automate and centralize the management of storage. Using SMS, a storage administrator describes data allocation characteristics, performance and availability goals, backup and retention requirements, and storage requirements to the system through data class, storage class, management class, storage group, and ACS routine definitions.

storage subsystem A storage control and its attached storage devices.

structure A construct used by z/OS to map and manage storage on a Coupling Facility. See also ”cache structure,” ”list structure,” and ”lock structure.”

subagent A software component of SAN products that provides the actual remote query and control function, such as gathering host information and communicating with other components. This component is platform dependent. See also ”SAN agent.”

subsystem interface (SSI)

A component that provides communication between z/OS and its job entry subsystem.

subsystem masking The support provided by intelligent disk storage subsystems like the Enterprise Storage Server. See also ”LUN masking” and ”port zoning.”

supervisor call instruction

An instruction that interrupts a program being executed and passes control to the supervisor so that it can perform a specific service indicated by the instruction.

support element A hardware unit that provides communications, monitoring, and diagnostic functions to a central processor complex (CPC).

suspended state A state occurring when only one of the devices in a dual copy or remote copy volume pair is being updated because of either a permanent error condition or an authorized user command. All writes to the remaining functional device are logged. This allows for automatic resynchronization of both volumes when the volume pair is reset to the active duplex state.


SVC routine A control program routine that performs or begins a control program service specified by a supervisor call instruction.

SVC See also ”SAN volume controller (SVC).”

switch A component with multiple entry and exit points or ports that provide dynamic connection between any two of these points.

switch topology A switch allows multiple concurrent connections between nodes. There can be two types of switches: circuit switches and frame switches. Circuit switches establish a dedicated connection between two nodes. Frame switches route frames between nodes and establish the connection only when needed. A switch can handle all protocols.

Symmetrix Remote Data Facility (SRDF)

A family of replication software offering various levels of Symmetrix-based business continuance and disaster recovery solutions. The SRDF products offer the capability to maintain multiple, host-independent, mirrored copies of data. The Symmetrix systems can be in the same room, in different buildings within the same campus, or hundreds to thousands of kilometers apart.

symmetry The characteristic of a sysplex where all systems, or certain subsets of the systems, have the same hardware and software configurations and share the same resources.

synchronization An initial volume copy process. It produces a track image copy of each primary track on the volume on the secondary volume.

synchronous messages

WTO or WTOR messages issued by a z/OS system during certain recovery situations.

synchronous mode Available with the SRDF/S product offering, synchronous mode maintains a real-time mirror image of data between the primary and secondary volumes. Data must be successfully stored in both the local and remote Symmetrix systems before an acknowledgement is sent to the primary site host.

synchronous operation

A type of operation in which the remote copy PPRC function copies updates to the secondary volume of a PPRC pair at the same time that the primary volume is updated. Contrast with asynchronous operation. See also ”peer-to-peer remote copy (PPRC).”


sysplex A set of z/OS systems communicating and cooperating with each other through certain multisystem hardware components and software services to process customer workloads. See also ”Parallel Sysplex.”

sysplex couple dataset

A couple dataset that contains sysplex-wide data about systems, groups, and members that use XCF services. All z/OS systems in a sysplex must have connectivity to the sysplex couple dataset. See also ”couple dataset.”

sysplex timer An IBM unit that synchronizes the time-of-day (TOD) clocks in multiple processors or processor sides.

system console In z/OS, a console attached to the processor controller used to initialize a z/OS system.

system control element (SCE)

Hardware that handles the transfer of data and control information associated with storage requests between the elements of the processor.

system management facilities (SMF)

An optional control program feature of z/OS that provides the means for gathering and recording information that can be used to evaluate system usage.

System Modification Program Extended (SMP/E)

In addition to providing the services of SMP, SMP/E consolidates installation data, allows more flexibility in selecting changes to be installed, provides a dialog interface, and supports dynamic allocation of datasets.

system A z/OS image together with its associated hardware, which collectively are often referred to simply as a system, or z/OS system.

Systems Network Architecture (SNA)

A description of the logical structure, formats, protocols, and operational sequences for transmitting information units through, and controlling the configuration and operation of networks.

T

TCP See also ”transmission control protocol.”

TCP/IP Transmission Control Protocol/Internet Protocol.


temporary dataset A dataset that is created and deleted in the same job.

temporary link loss If SRDF/A suffers a temporary loss (<10 seconds by default) on all of the SRDF links, the SRDF/A state remains active and data continues to accumulate in global memory. This may result in an elongated cycle, but the secondary Symmetrix dependent-write consistency is not compromised and the primary and secondary Symmetrix device relationships are not suspended. The amount of time SRDF waits until it declares a link loss permanent is configurable.
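The decision the entry above describes can be pictured as a simple grace-period check. A hypothetical sketch (not Enginuity code; the function name and return values are invented for illustration), using the default 10-second threshold named in the entry:

```python
# Hypothetical sketch of the temporary-vs-permanent link-loss decision:
# while all SRDF links are down, writes accumulate in the active cycle;
# only after the configurable grace period elapses is the loss declared
# permanent and the SRDF/A session dropped.

PERMANENT_LOSS_SECONDS = 10  # configurable; 10 s is the stated default

def classify_link_loss(outage_seconds, grace=PERMANENT_LOSS_SECONDS):
    """Return the SRDF/A session disposition for an all-links-lost event."""
    if outage_seconds < grace:
        return "active"   # cycle elongates; consistency is preserved
    return "dropped"      # loss declared permanent; device pairs suspended

print(classify_link_loss(4))   # brief outage: session stays active
print(classify_link_loss(30))  # prolonged outage: session drops
```

Raising the grace period trades longer cycles (and more global memory consumed by the accumulating delta set) for fewer session drops on flaky links.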

terminal A device, usually equipped with a keyboard and some kind of display, capable of sending and receiving information over a link.

terminal user In systems with time-sharing, anyone who is eligible to log on.

tightly coupled Multiple CPs that share storage and are controlled by a single copy of z/OS. See also ”loosely coupled,” and “tightly coupled multiprocessor.”

tightly coupled multiprocessor

Any CPU with multiple CPs.

Time Sharing Option (TSO)

The facility in z/OS that allows interactive time sharing from remote terminals.

timeout The time in seconds that the storage control remains in a “long busy” condition before physical sessions are ended.

tolerance mode Symmetrix SRDF/A tolerance mode allows certain conditions to occur that would normally drop SRDF/A. These conditions could include making the secondary volumes read/write. When tolerance mode is set to on, dependent-write consistency is NOT guaranteed.

topology An interconnection scheme that allows multiple Fibre Channel ports to communicate. For example, point-to-point, arbitrated loop, and switched fabric are all Fibre Channel topologies.

transaction A unit of work performed by one or more transaction programs, involving a specific set of input data and initiating a specific process or job.

transactionalconsistency

Transactional consistency is a DBMS state where all in-flight transactions are either completed or rolled back.


transmission control protocol

A communications protocol used in the Internet and in any network that follows the Internet Engineering Task Force (IETF) standards for Internetwork protocol. TCP provides a reliable host-to-host protocol between hosts in packet-switched communications networks and in interconnected systems of such networks. It uses the Internet Protocol (IP) as the underlying protocol.

transmit idle A Reserve Capacity enhancement to SRDF/A that provides the capability to dynamically and transparently extend the Capture, Transmit, and Receive phases of the SRDF/A cycle while masking the effects of an “all SRDF links lost” event. Without the SRDF/A Transmit Idle enhancement, an “all SRDF links lost” event would normally result in the abnormal termination of SRDF/A. The SRDF/A Transmit Idle enhancement is specifically designed to prevent that termination.

U

unidirectional SRDF link

If all primary (source, R1) volumes reside in one Symmetrix system and all secondary (target, R2) volumes reside in another Symmetrix system, write operations move in one direction, from primary to secondary. This is a unidirectional configuration, in which data moves in the same direction between all devices in the SRDF group.

uniprocessor (UP) A CPC that contains one CP and is not partitionable.

unit address The hexadecimal value that uniquely defines a physical I/O device on a channel path in an MVS environment. See also ”device address.”

V

vDisk See also ”virtual disk.”

virtual disk An SVC device that appears to host systems attached to the SAN as an SCSI disk. Each vDisk is associated with exactly one I/O group.


Virtual Storage Access Method (VSAM)

An access method for direct or sequential processing of fixed-length and varying-length records on direct access devices. The records in a VSAM dataset or file can be organized in logical sequence by a key field (key sequence), in the physical sequence in which they are written on the dataset or file (entry-sequence), or by relative-record number.

virtual storage (1) The storage space that can be regarded as addressable main storage by the user of a computer system in which virtual addresses are mapped into real addresses. The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of auxiliary storage available, not by the actual number of main storage locations. (2) An addressing scheme that allows external disk storage to appear as main storage.

virtual telecommunications access method (VTAM)

A set of programs that maintain control of the communication between terminals and application programs running under z/OS.

volume A general term referring to a storage device. In the Symmetrix system, a volume corresponds to a single disk device.

volume serial number A number in a volume label that is assigned when a volume is prepared for use in the system.

volume table of contents (VTOC)

An area on a DASD volume that describes the location, size, and other characteristics of each dataset on the volume.

W

wait state Synonymous with ”waiting time.”

waiting time (1) The condition of a task that depends on one or more events in order to enter the ready condition. (2) The condition of a processing unit when all operations are suspended.

WAN Wide area network.


wave division multiplexing

WDM allows the simultaneous transmission of a number of data streams over the same physical fiber cable, each using a different optical wavelength. WDM receives incoming optical signals from many sources (Fibre Channel, IP, ESCON, FICON), converts them to electrical signals, assigns each a specific wavelength (or lambda) of light, and retransmits them on that wavelength. This method relies on the large number of wavelengths available within the light spectrum. Coarse WDM (CWDM) and Dense WDM (DWDM) are based on the same methodology as WDM, enabling more data streams over the same physical fiber.

WDM See also ”wave division multiplexing.”

world wide name A unique number assigned to Fibre Channel devices (including hosts and adapter ports). It is analogous to a MAC address on a network card.

wrap mode The console display mode that allows a separator line between old and new messages to move down a full screen as new messages are added. When the screen is filled and a new message is added, the separator line overlays the oldest message and the newest message appears immediately before the line.

write hit There is room in cache for the data presented by the write operation.

write miss There is no room in cache for the data presented by the write operation.

write-to-operator (WTO) message

A message sent to an operator console informing the operator of errors and system conditions that may need correcting.

write-to-operator-with-reply (WTOR) message

A message sent to an operator console informing the operator of errors and system conditions that may need correcting. The operator must enter a response.

WWN See also ”world wide name.”

Z

z/OS A widely used operating system for IBM zSeries mainframe computers that uses 64-bit real storage.


z/OS UNIX System Services (z/OS UNIX)

The set of functions provided by the SHELL and UTILITIES, kernel, debugger, file system, C/C++ Run-Time Library, Language Environment, and other elements of the z/OS operating system that allow users to write and run application programs that conform to UNIX standards.

zoning In Fibre Channel environments, zoning allows for finer segmentation of the switched fabric. Zoning can be used to create a barrier between different environments. Ports that are members of a zone can communicate with each other but are isolated from ports in other zones. Zoning can be implemented in two ways: hardware zoning and software zoning. See also ”hardware zoning” and ”software zoning.”


Index

A
Asynchronous replication 197

B
Basic operations of SRDF/A
   All links are lost 321
   BCV split 326
   Link Failure 308
   PEND_DROP 308
   Reestablish BCVs 334

C
Cascaded SRDF 74
Cascaded SRDF/Star 175
Consistency group 24
Consistency Technologies
   historical overview 32
   SRDF/A 44
   SRDF/A MSC 44
   SRDF/AR multi-hop 40
   SRDF/AR single hop 38
   SRDF/S consistency groups 32

D
Delta Set Extension (DSE)
   additional restrictions 242
   estimating RPO impact 235
   paging performance 228
   planning for SRDF/A 227
   pool performance 234
   save device 232
   sizing configuration additions 230
   sizing example 237
Dependent-write
   Consistency 23
   I/O 23
Disaster
   complexity of recovery solution 27
   design considerations 26
   recovery 25
   restart 25

E
EMC
   AutoSwap 83
      highlights 84
      Use cases 85
   foundation products 60
   Geographically Dispersed Disaster Restart (GDDR) 75

G
Gatekeeper 257

I
Implementation of SRDF/A 254
   configuration overview 268
   design considerations 254
   DSE pool definition 287
   establishing Cascaded SRDF 296
   software requirements 254
   Symmetrix Control Facility (SCF) 262


   Symmetrix Control Facility (SCF) authorization codes 254

L
Locality of reference 204
   spatial locality 204
   SRDF link 206
   Symmetrix cache 205
   Temporal locality 204
   write folding 204

M
Multi-Session Consistency 74

N
Network considerations 224
   distance on throughput 224
   response time 225
Normal paging 233

P
Peak time 200
   activity level and duration 200
Performance comparisons
   SRDF/S and SRDF/A 195
Point of consistency 23

Q
Quality of Service (QoS) parameter 226

R
RDF group 270
Recovery
   Point Objective (RPO) 26
   Time Objective (RTO) 26
Repaging 233
Replication protection
   synchronous versus asynchronous 193
ResourcePak Base for z/OS 66

   features 67
Restore 24
Return Home
   overview 347
   procedures
      Activate secondary R2 volumes 338
      Restart 336
Rolling disaster 24

S
Solutions Enabler 49
split 36
SRDF
   modes 22
   solution comparisons 189

SRDF family of products for z/OS 70
SRDF/A
   analysis tools
      STP Navigator 245
      Workload Analyzer Performance Manager 245
   balanced configurations 222
   balancing configurations 218
   cache calculation 216
   features and benefits 93
   history 95
      Enginuity 5670 95
      Enginuity 5670.50 95
      Enginuity 5671 95
      Enginuity 5772 98
      Enginuity 5772.79.71 99
      Enginuity 5773 99
      Enginuity 5874 101
      Enginuity 5874 Q4’09 SR 104
   implementation 188
   Link bandwidth 209
   planning and design service 248
   Reserve capacity enhancement
      Delta Set Extension (DSE) 137
      Transmit Idle 129
   solution parameters 198
   terms and concepts 23
   unbalanced configurations 221
   use in Cascaded SRDF configuration 167

SRDF/A Automated Recovery 81
SRDF/A Multi-Session Consistency (MSC)
   cleanup process 162
   mode 146
      delta set switching 151
      dependent-write consistency 147


SRDF/A single session mode 108
   cleanup process 126
   delta set switching 112
   logical states 110
   recovery scenarios 127
   state transitions 119

SRDF/AR (Automated Replication)
   functionality 191
SRDF/S (Synchronous)
   functionality 189
SRDF/Star 72
Symmetrix DMX
   storage platform operating environment 60
   data protection options 64
   EMC Enginuity 61
   Mainframe 63

T
TimeFinder
   CG (plug in) 89
   Clone for z/OS 86
   family of products for z/OS 86
   Mirror for z/OS 88
   Snap for z/OS 87
   use to create restartable copy 164
Tolerance mode 106
Transactional consistency 24
