EMC Backup and Recovery for Oracle 11g OLTP Enabled by EMC CLARiiON, EMC Data Domain, EMC NetWorker, and Oracle Recovery Manager using NFS Proven Solution Guide


EMC Backup and Recovery for Oracle 11g OLTP

Enabled by EMC CLARiiON, EMC Data Domain, EMC NetWorker, and Oracle Recovery Manager using NFS

Proven Solution Guide

Copyright © 2010 EMC Corporation. All rights reserved. Published June, 2010.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

Benchmark results are highly dependent upon workload, specific application requirements, and system design and implementation. Relative system performance will vary as a result of these and other factors. Therefore, this workload should not be used as a substitute for a specific customer application benchmark when critical capacity planning and/or product evaluation decisions are contemplated. All performance data contained in this report was obtained in a rigorously controlled environment. Results obtained in other operating environments may vary significantly. EMC Corporation does not warrant or represent that a user can or will achieve similar performance expressed in transactions per minute. No warranty of system performance or price/performance is expressed or implied in this document.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com. All other trademarks used herein are the property of their respective owners.

Part number: H7207

Table of Contents

Chapter 1: About this Document
  Overview
  Audience and purpose
  Business challenge
  Technology solution
  Objectives
  Reference Architecture
  Validated environment profile
  Hardware and software resources
  Prerequisites and supporting documentation
  Terminology

Chapter 2: Use Case Components

Chapter 3: Storage Design
  Overview
  CLARiiON storage design and configuration
  Data Domain
  SAN topology

Chapter 4: Oracle Database Design
  Overview

Chapter 5: Installation and Configuration
  Overview
  Navisphere
  PowerPath
  Install Oracle Clusterware
  Data Domain
  NetWorker
  Multiplexing

Chapter 6: Testing and Validation
  Overview
  Section A: Test results summary and resulting recommendations

Chapter 7: Conclusion
  Overview

Appendix A: Scripts

Chapter 1: About this Document

Overview

Introduction to solution

This Proven Solution Guide summarizes a series of best practices that were discovered, validated, or otherwise encountered during the validation of the EMC Data Domain® backup and recovery solution for an Oracle 11g OLTP environment enabled by EMC® CLARiiON®, EMC Data Domain, EMC NetWorker®, and Oracle Recovery Manager.

EMC's commitment to consistently maintain and improve quality is led by the Total Customer Experience (TCE) program, which is driven by Six Sigma methodologies. As a result, EMC has built Customer Integration Labs in its Global Solutions Centers to reflect real-world deployments in which TCE use cases are developed and executed. These use cases provide EMC with insight into the challenges currently facing its customers.

Use case definition

A use case reflects a defined set of tests that validates the reference architecture for a customer environment. This validated architecture can then be used as a reference point for a Proven Solution.

Contents
The content of this chapter includes the following topics:

• Audience and purpose
• Business challenge
• Technology solution
• Objectives
• Reference Architecture
• Validated environment profile
• Hardware and software resources
• Prerequisites and supporting documentation
• Terminology


Audience and purpose

Audience
The intended audience for this Proven Solution Guide is:

• Internal EMC personnel
• EMC partners
• Customers

Purpose
The purpose of this proven solution for deduplication is to define a working infrastructure for an Oracle RAC environment with a 1 TB Oracle OLTP database on a CLARiiON storage infrastructure, using a Data Domain appliance to:

• Demonstrate the dramatic reduction in the amount of disk storage needed to retain and protect enterprise data enabled by Data Domain

• Determine the reduction of backup impact by offloading the backup to a proxy mount host enabled by SnapView™ clones and NetWorker

• Validate the improvement in Recovery Time Objective (RTO) when the backup schedule utilizes full backups only

This document provides a specification for the customer environment (storage configurations, design, sizing, software and hardware, and so on) that constitutes an enterprise Oracle 11g RAC backup and recovery solution in an Oracle OLTP environment, deployed on the EMC CLARiiON CX4-960. In addition, this use case provides information on:

• Building an enterprise Oracle 11g RAC environment on an EMC CLARiiON CX4-960.

• Identifying the steps required to design and implement an enterprise-level Oracle 11g RAC solution around EMC software and hardware.

• Deploying a Data Domain DD880 appliance.


Business challenge

Overview
Today's IT is being challenged by the business to solve the following pain points around the backup and recovery of the business's critical data:

• Protect business information as an asset, within the business's defined recovery point objective (RPO, the amount of data to recover) and recovery time objective (RTO, the time to recover)
• Use both infrastructure and people to support the business efficiently
• Back up large, enterprise-critical, multi-terabyte systems

Exponential data growth, changing regulatory requirements, and increasingly complex IT infrastructure all have a major impact on data managers' data protection schemes. RTO continues to decrease while the precision of the RPO increases. In other words, IT managers must be able to recover from a given failure quicker than ever and with less data loss.

It is not uncommon for organizations to routinely exceed their backup window, or even have a backup window that takes up most of the day. Such long backup operations leave little margin for error, and any disruption can place some of the data at risk of loss. Such operations also mean that a guaranteed RPO cannot be met.

Because of the demands generated by data growth and the RTO/RPO requirements in Oracle database environments, it is critical that robust, reliable, and tested backup and recovery processes are in place. Backup and recovery of Oracle databases are a vital part of IT data protection strategies. To meet these backup and recovery challenges, enterprises need proven solution architectures that encompass the best of what EMC and Oracle can offer.

Technology solution

Overview
This solution describes a backup and recovery environment for an Oracle 11g OLTP database. The database was deployed on a CLARiiON CX4-960 and demonstrates the ease and power of integrating EMC storage with Oracle Automatic Storage Management (ASM). This was tested in a two-node Oracle RAC configuration.

Backup and recovery were implemented using Oracle RMAN, SnapView clones, and EMC NetWorker. The backup was deployed over NFS to an EMC Data Domain DD880 deduplication appliance. The backup process was off-loaded to an EMC NetWorker proxy host using Navisphere® SnapView clones: the replica clone copy of the database was mounted to the proxy node, also referred to as the "clone mount host," and backups were then executed on that node.
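The flow above can be sketched as a short sequence of host commands. This is an illustrative sketch only: the storage-processor address, clone group names, clone IDs, export path, and mount options are hypothetical, and the validated commands for this environment are the ones in Appendix A: Scripts.

```shell
# Mount the Data Domain NFS export on the proxy (clone mount) host.
# Mount options follow common Data Domain guidance of the period;
# verify against the DDOS administration guide for your release.
mount -t nfs -o hard,intr,rsize=32768,wsize=32768 dd880:/backup /dd_backup

# Consistently fracture the SnapView backup clones while the database
# is quiesced, so the clone set is a restartable point-in-time image.
naviseccli -h <sp_address> snapview -consistentfractureclones \
    -CloneGroupNameCloneId CG_DATA 0100000000000000 CG_FRA 0100000000000000

# The fractured clone LUNs are then mounted on the proxy host, the ASM
# instance is started there, and RMAN backups run against the clone,
# keeping backup I/O off the production RAC nodes.
```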


The following table describes the key components and their configuration details within this environment.

• Storage array: CLARiiON CX4-960
  Configuration: four BE 4 Gb FC ports and eight FE 4 Gb FC ports per storage processor; nine DAEs with five 146 GB and 130 x 300 GB disk drives
  Software: FLARE® 04.29.000.5.003

• Deduplication appliance: Data Domain DD880
  Configuration: two 10 GbE optical NICs; two SAS HBAs for disk connectivity; three ES20 disk shelves with 48 disks
  Software: DDOS 4.7.1.3

• Database: Oracle 11g OLTP database system
  Configuration: 1 TB Oracle 11g OLTP database on a two-node RAC using ASM
  Software: Oracle 11g Database/Cluster/ASM version 11.1.0.7

• Backup manager: EMC NetWorker
  Configuration: NetWorker server, dedicated storage node, and clients
  Software: NetWorker 7.6; NetWorker Module for Oracle (NMO) 5.0

Objectives

This document provides guidelines on how to configure and set up an Oracle 11g OLTP database with Data Domain deduplication storage systems. The solution demonstrates the benefits of deduplication in an Oracle backup environment.

The backup schedule used only level 0 (full) backups. Level 1 (incremental) backups were not used because, when target deduplication is deployed, only unique, new data is written to disk. The deduplicated backup image therefore does not carry the restore penalty associated with incremental backups, because the entire backup image is always available.

This document is not intended to be a comprehensive guide to every aspect of an enterprise Oracle 11g solution. It describes how to perform the following functions:

• Install and build the infrastructure
• Configure and test CLARiiON storage
• Configure the Oracle 11g environment
• Configure the Data Domain system as an NFS backup target
• Configure NetWorker
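A level 0–only RMAN schedule of the kind described above can be sketched as follows. This is not the validated script from Appendix A: the channel count, format string, and mount point are hypothetical, and the path assumes the Data Domain NFS export is mounted at /dd_backup.

```rman
RUN {
  # Two disk channels writing to the NFS-mounted Data Domain file system
  ALLOCATE CHANNEL ch1 DEVICE TYPE DISK FORMAT '/dd_backup/orcl/%U';
  ALLOCATE CHANNEL ch2 DEVICE TYPE DISK FORMAT '/dd_backup/orcl/%U';

  # Every cycle is a full (level 0) backup; deduplication on the DD880
  # means repeated fulls consume little additional physical space.
  # Keeping FILESPERSET low preserves deduplication effectiveness
  # (see the Multiplexing section in Chapter 5).
  BACKUP INCREMENTAL LEVEL 0 FILESPERSET 1 DATABASE;
  BACKUP ARCHIVELOG ALL NOT BACKED UP;
  BACKUP CURRENT CONTROLFILE;
}
```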


Reference Architecture

Corresponding Reference Architecture

This use case has a corresponding Reference Architecture document that is available on Powerlink® and EMC.com. Refer to EMC Backup and Recovery for Oracle Database 11g—OLTP enabled by EMC CLARiiON, EMC Data Domain, and EMC NetWorker using NFS Reference Architecture for details. If you do not have access to this content, contact your EMC representative.

Reference Architecture diagram

The following diagram depicts the overall physical architecture of the use case.


Validated environment profile

Profile characteristics

The use case was validated with the following environment profile.

• Database characteristic: OLTP
• Benchmark profile: Swingbench OrderEntry (a TPC-C-like benchmark)
• Response time: < 10 ms
• Read/write ratio: 70/30
• Database scale: a Swingbench load that keeps the system running within agreed performance limits
• Size of database: 1 TB
• Number of databases: 1
• Array drives (size and speed): 300 GB; 15k rpm

Hardware and software resources

Hardware
The hardware used to validate the use case is listed below.

• Storage array (quantity 1): CLARiiON CX4-960 with nine DAEs; 5 x 146 GB FC drives; 126 x 300 GB FC drives; 4 x 300 GB hot spares
• SAN (quantity 2): 4 Gb-capable FC switch, 64 ports
• Deduplication appliance (quantity 1): Data Domain DD880 with two 10 GbE optical NICs; two SAS HBAs for disk connectivity; three ES20 disk shelves with 48 disks
• Oracle database node (quantity 2): four quad-core Xeon E7330 processors, 2.4 GHz, 6 MB cache, 1066 MHz FSB, 32 GB RAM; two 73 GB 10k internal disks; two dual-port 4 Gb Emulex LP11002E HBAs
• Proxy node (mount host) (quantity 1): four quad-core Xeon E7330 processors, 2.4 GHz, 6 MB cache, 1066 MHz FSB, 32 GB RAM; two 73 GB 10k internal disks; two dual-port 4 Gb Emulex LP11002E HBAs; two 10 Gigabit XF SR server adapters
• Navisphere management server / NetWorker server (quantity 1): two quad-core processors, 1.86 GHz, 16 GB RAM; two 4 Gb Emulex LP11002E HBAs
• Network, backup (quantity 1): Brocade TurboIron 24
• Network, management (quantity 2): Cisco Catalyst 3750G

Software
The software used to validate the use case is listed below.

• RedHat Linux 5.3: OS for database nodes
• Microsoft Windows 2003 SP2: OS for Navisphere management server
• Oracle Database/Cluster/ASM 11g Release 1 (11.1.0.7.0): database/cluster software and volume management
• Oracle ASMLib 2.0: support library for ASM
• Swingbench 2.3: OLTP database benchmark
• Orion 10.2: the Oracle I/O Numbers Calibration Tool, designed to simulate Oracle I/O workloads
• FLARE operating environment 04.29.000.5.003
• Navisphere Management Suite: includes Access Logix™ and Navisphere Agent
• Navisphere Analyzer 6.29.0.6.34: Analyzer enabler
• SnapView 6.29.0.6.34.1: SnapView enabler
• PowerPath® 5.3: multipathing software
• DDOS 4.7.1.3: Data Domain OS
• NetWorker 7.6: backup and recovery suite
• NetWorker Module for Oracle 5.0: NetWorker Oracle integration
• Brocade TurboIron software 04.1.00c: 10 GbE network
• Cisco IOS 12.2: network
• Fabric OS 6.2.0g: SAN


Prerequisites and supporting documentation

Technology
It is assumed the reader has a general knowledge of:

• EMC CLARiiON
• EMC Data Domain
• EMC NetWorker
• Oracle Database
• Red Hat Linux

Supporting documents

The following documents, located on Powerlink.com, provide additional, relevant information. Access to these documents is based on your login credentials. If you do not have access to the following content, contact your EMC representative.

• EMC CLARiiON CX4-960 Setup Guide
• EMC Navisphere Manager Help (HTML)
• EMC PowerPath Product Guide
• EMC CLARiiON Database Storage Solution: Oracle 10g/11g with CLARiiON Storage Replication Consistency
• EMC CLARiiON Server Support Products for Linux Servers Installation Guide
• EMC Support Matrix
• Data Domain OS Initial Configuration Guide
• Data Domain OS Administration Guide
• NetWorker Installation Guide
• NetWorker Administration Guide
• NetWorker Module for Oracle Administration Guide
• NetWorker Module for Oracle Installation Guide
• EMC Backup and Recovery for Oracle 11g OLTP Enabled by EMC CLARiiON, EMC Data Domain, EMC NetWorker, and Oracle Recovery Manager using Fibre Channel Proven Solution Guide

Third-party documents

The following documents are available on third-party websites.

• Oracle Database Installation Guide 11g Release 1 (11.1) for Linux
• Oracle Real Application Clusters Installation Guide 11g Release 1 (11.1) for Linux
• Oracle Clusterware Installation Guide 11g Release 1 (11.1) for Linux
• Oracle Database Backup and Recovery User's Guide
• Orion: Oracle I/O Numbers Calibration Tool
• Why Are Datafiles Being Written To During Hot Backup? (Doc ID: 1050932.6)
• What Happens When A Tablespace/Database Is Kept In Begin Backup Mode (Doc ID: 469950.1)


Terminology

Terms and definitions
This section defines the terms used in this document.

• ASM: Automatic Storage Management
• BE: Back End
• DAE: Disk Array Enclosure
• DBCA: Database Configuration Assistant
• FE: Front End
• NFS: Network File System
• NMO: NetWorker Module for Oracle
• RAC: Real Application Clusters
• RPO: Recovery Point Objective
• RTO: Recovery Time Objective
• SAS: Serial Attached SCSI
• SISL: Stream-Informed Segment Layout


Chapter 2: Use Case Components

Introduction
This section briefly describes the key solution components. For details on all of the components that make up the solution architecture, refer to the "Hardware" and "Software" sections.

CLARiiON CX4-960

The EMC CLARiiON CX4 model 960 enables you to handle the most data-intensive workloads and large consolidation projects. The CLARiiON CX4-960 delivers innovative technologies such as Flash drives, Virtual Provisioning™, a 64-bit operating system, and multi-core processors.

The CX4's new flexible I/O module design, UltraFlex™ technology, delivers an easily customizable storage system. Additional connection ports can be added to expand connection paths from servers to the CLARiiON; the CX4-960 can be populated with up to six I/O modules per storage processor. CLARiiON CX4 is designed to work with Oracle ASM to give DBAs the most comprehensive protection for their Oracle database environment, while maintaining the ease-of-use elements offered by ASM.

EMC Data Domain DD880

EMC Data Domain deduplication storage systems dramatically reduce the amount of disk storage needed to retain and protect enterprise data, including Oracle databases. By identifying redundant data as it is being stored, Data Domain systems provide a storage footprint that is five to 30 times smaller, on average, than the original dataset. Backup data can then be efficiently replicated and retrieved over existing networks for streamlined disaster recovery and consolidated tape operations. This allows Data Domain appliances to integrate seamlessly into Oracle architectures, maintaining existing backup strategies such as Oracle RMAN with no changes to scripts, backup processes, or system architecture.

The Data Domain DD880 is the industry's highest-throughput, most cost-effective, and scalable deduplication storage solution for disk backup and network-based disaster recovery (DR). The high inline deduplication data rate of the DD880 is enabled by the Data Domain Stream-Informed Segment Layout (SISL) scaling architecture. This level of throughput is achieved by a CPU-centric approach to deduplication, which minimizes the number of disk spindles required.

Brocade TurboIron 24X switch

The Brocade TurboIron 24X switch is a compact, high-performance, high-availability, and high-density 10/1 GbE dual-speed solution. It meets mission-critical data center top-of-rack and High-Performance Cluster Computing (HPCC) requirements. An ultra-low-latency, cut-through, non-blocking architecture and low power consumption help provide a cost-effective solution for server or compute-node connectivity.


Navisphere Management Suite

The Navisphere Management Suite of integrated software tools allows you to manage, discover, monitor, and configure EMC CLARiiON systems as well as control all platform replication applications from a simple, secure, web-based management console. Navisphere Management Suite enables you to access and manage all CLARiiON advanced software functionality—including EMC Navisphere Quality of Service Manager, Navisphere Analyzer, SnapView, SAN Copy™, and MirrorView™. When used with other EMC storage management software, you gain storage resource, SAN, and replication management functionality—for greater efficiency and control over CLARiiON storage infrastructure.

EMC PowerPath

EMC PowerPath is server-resident software that enhances performance and application availability. PowerPath works with the storage system to intelligently manage I/O paths and supports multiple paths to a logical device. In this solution, PowerPath manages I/O paths and provides:

• Automatic failover in the event of a hardware failure. PowerPath automatically detects path failure and redirects I/O to another path.
• Dynamic multipath load balancing. PowerPath intelligently distributes I/O requests to a logical device across all available paths, improving I/O performance and reducing management time and downtime by eliminating the need to configure paths statically across logical devices.

PowerPath enables customers to standardize on a single multipathing solution across their entire environment.
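On the hosts, path state can be inspected and managed with the powermt utility. The commands below are a generic sketch; device names and output vary by environment.

```shell
# Display every PowerPath pseudo-device and the state of each native path
powermt display dev=all

# Claim newly presented paths and persist the configuration
powermt config
powermt save
```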

EMC NetWorker
EMC NetWorker software is a high-capacity, easy-to-use data storage management solution that protects and helps to manage data across an entire network. NetWorker simplifies the storage management process and reduces the administrative burden by automating and centralizing data storage operations.

NetWorker Module for Oracle (NMO)
NMO integrates database and file system backups, relieving the database administrator of the backup burden while allowing the administrator to retain control of the restore process. NMO includes the following features:

• Automatic database storage management through automated scheduling, autochanger support, electronic tape labeling, and tracking.
• Support for backup to a centralized backup server.
• High performance through support for multiple, concurrent high-speed devices, such as digital linear tape (DLT) drives.

EMC NetWorker, together with the NetWorker Module for Oracle, provides tight integration with Oracle RMAN and seamlessly uses a Data Domain deduplication appliance as an NFS target for RMAN backups. These elements create a fast, efficient, and nondisruptive backup that offloads the backup burden from the production RAC environment.
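With NMO, RMAN writes through NetWorker via SBT channels whose behavior is steered by NSR_* variables. The following is a hypothetical channel allocation; the server name and pool name are examples, not values from this solution.

```rman
RUN {
  # SBT channel routed through the NetWorker client libraries.
  # NSR_SERVER names the NetWorker server; NSR_DATA_VOLUME_POOL names
  # the pool whose devices sit on the Data Domain NFS mount.
  ALLOCATE CHANNEL t1 DEVICE TYPE 'SBT_TAPE'
    PARMS 'ENV=(NSR_SERVER=nwserver, NSR_DATA_VOLUME_POOL=DDPool)';
  BACKUP INCREMENTAL LEVEL 0 DATABASE;
  RELEASE CHANNEL t1;
}
```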


EMC SnapView
SnapView is a storage-system-based software application that allows you to create a copy of a LUN by using either clones or snapshots. A clone is an actual copy of a LUN and takes time to create, depending on the size of the source LUN. A snapshot is a virtual point-in-time copy of a LUN and takes only seconds to create. SnapView has the following important benefits:

• Allows full access to a point-in-time copy of your production data with modest impact on performance and without modifying the actual production data.
• For decision support or revision testing, provides a coherent, readable and writeable copy of real production data.
• For backup, practically eliminates the time that production data spends offline or in hot backup mode, and offloads the backup overhead from the production server to another server.
• Provides a consistent replica across a set of LUNs. You can do this by performing a consistent fracture, which is a fracture of more than one clone at the same time, or a fracture that you create when starting a session in consistent mode.
• Provides instantaneous data recovery if the source LUN becomes corrupt. You can perform a recovery operation on a clone by initiating a reverse synchronization, and on a snapshot session by initiating a rollback operation.
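The clone operations above map to Navisphere CLI commands along these lines. The storage-processor address, clone group names, and clone IDs are placeholders; verify the exact syntax against the Navisphere CLI reference for your FLARE release.

```shell
# Fracture two clones at the same instant to get a consistent replica
naviseccli -h <sp_address> snapview -consistentfractureclones \
    -CloneGroupNameCloneId CG_DATA 0100000000000000 CG_LOG 0100000000000000

# Recover a corrupted source LUN from its clone (reverse synchronization)
naviseccli -h <sp_address> snapview -reversesyncclone \
    -name CG_DATA -cloneid 0100000000000000
```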

Oracle Database 11g Enterprise Edition

Oracle Database 11g Enterprise Edition delivers industry-leading performance, scalability, security, and reliability on a choice of clustered or single servers running Windows, Linux, and UNIX. It provides comprehensive features to easily manage the most demanding transaction processing, business intelligence, and content management applications. Oracle Database 11g Enterprise Edition comes with a wide range of options to help grow your business and meet users' performance, security, and availability service level expectations.

Oracle Database 11g RAC
Oracle Real Application Clusters (RAC) is an optional feature of Oracle Database 11g Enterprise Edition. Oracle RAC supports the transparent deployment of a single database across a cluster of servers, providing fault tolerance from hardware failures or planned outages. If a node in the cluster fails, Oracle continues running on the remaining nodes. If more processing power is needed, you can add new nodes to the cluster to provide horizontal scaling. Oracle RAC supports mainstream business applications of all kinds, including Online Transaction Processing (OLTP) and Decision Support System (DSS).

Oracle ASM
Oracle Automatic Storage Management (ASM) is an integrated database filesystem and disk manager that reduces the complexity of managing the storage for the database. The ASM filesystem and volume management capabilities are built into the Oracle database kernel. In addition to providing performance and reliability benefits, ASM can also increase database availability because disks can be added or removed without shutting down the database. ASM automatically rebalances the database files across an ASM diskgroup after disks have been added or removed.
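As a sketch of how this looks in practice, a disk group can be created over ASMLib-labeled devices with external redundancy. The disk group and disk names here are examples, not the ones used in this solution.

```sql
-- EXTERNAL REDUNDANCY delegates data protection to the CLARiiON RAID
-- groups instead of ASM mirroring
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK 'ORCL:DATA1', 'ORCL:DATA2', 'ORCL:DATA3';

-- Adding a disk later triggers an automatic online rebalance
ALTER DISKGROUP DATA ADD DISK 'ORCL:DATA4';
```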


Oracle ASMLib
ASMLib is a support library for the ASM feature of Oracle Database. It is an add-on module that simplifies the management and discovery of ASM disks. ASMLib provides an alternative to the standard operating system interface for ASM to identify and access block devices. ASMLib is composed of the actual ASMLib library, which is loaded by Oracle at Oracle startup, and a kernel driver that is loaded into the OS kernel at system boot. The kernel driver version is specific to the OS kernel.

ASMCMD
Oracle database administrators can use the asmcmd utility to query and manage their ASM systems. ASM-related information can be retrieved easily for diagnosing and debugging purposes.
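Typical asmcmd queries look like the following; the disk group and path names are examples.

```shell
asmcmd lsdg                      # disk group state, redundancy, free space
asmcmd ls +DATA/ORCL/DATAFILE    # browse files stored in a disk group
asmcmd du +DATA                  # space used under a directory
```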

Oracle Recovery Manager Oracle Recovery Manager (RMAN) is a command-line and Enterprise Manager-based tool for backing up and recovering an Oracle database. It provides block-level corruption detection during backup and restore. RMAN optimizes performance and space consumption during backup with file multiplexing and backup set compression, and integrates with Oracle Secure Backup and third-party media management products for tape backup.

Swingbench
Swingbench is a publicly available load generator (and benchmark tool) designed to stress test Oracle databases. Swingbench consists of a load generator, a coordinator, and a cluster overview. The software enables a load to be generated and the transactions/response times to be charted. Swingbench is provided with four benchmarks:

• OrderEntry: a TPC-C-like workload.
• Calling Circle: a telco-based self-service workload.
• Stress Test: performs simple insert/update/delete/select operations.
• DSS: a DSS workload based on the Oracle Sales History schema.

The Swingbench workload used in this testing was Order Entry. The Order Entry (PL/SQL) workload models the classic order entry stress test. It has a profile similar to the TPC-C benchmark: it models an online order entry system, with users being required to log in before purchasing goods.
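Swingbench also ships a command-line front end, charbench, which is convenient for scripted runs. The invocation below is a hypothetical Order Entry example; flags and the connect string are illustrative, so check the Swingbench documentation for the release in use.

```shell
# 50 concurrent users against the SOE schema for a 60-minute run,
# reporting user counts, transactions per minute, and per second
./charbench -c oeconfig.xml -cs //rac-scan/orcl -u soe -p soe \
    -uc 50 -rt 0:60 -v users,tpm,tps -r results.xml
```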


Chapter 3: Storage Design

Overview

Introduction to storage design

The environment consisted of a two-node Oracle 11g RAC cluster accessing a single production database. Each cluster node resided on its own server, a typical Oracle RAC configuration. The two RAC nodes communicated with each other over a dedicated private network through a Cisco Catalyst 3750G-48TS switch; this cluster interconnect synchronized the cache between the database instances.

The 10 GbE backup network was created using a Brocade TurboIron 24 switch, and a Fibre Channel SAN was provided by two Brocade 4900 switches. EMC PowerPath was used in this solution and works with the storage system to intelligently manage I/O paths. For each server, PowerPath managed four active and four passive I/O paths to each device.

Contents

This chapter contains the following topics:

• CLARiiON storage design and configuration
• Data Domain
• SAN topology


CLARiiON storage design and configuration

Design

The CLARiiON CX4-960 uses UltraFlex technology to provide array connectivity. This approach is extremely flexible and allows each CX4 to be tailored to each user's specific needs. In the CX4 deployed for this use case, each storage processor was populated with four back-end buses to provide 4 Gb connectivity to the DAEs and disk drives. Each storage processor had eight 4 Gb front-end Fibre Channel ports for SAN connectivity. There were also two iSCSI ports on each storage processor that were not used. Nine DAEs were populated with 130 x 300 GB 15k drives, and five 146 GB drives were also used for the vault. The CLARiiON was configured to house a 1 TB production database and two clone copies of that database. The clone copies were utilized as follows:

• Gold copy
• Backup copy

Gold copy

At various logical checkpoints within the testing process, the gold copy was refreshed to ensure there was an up-to-date copy of the database available at all times. This ensured that an instantaneous recovery image was always available in the event that any logical corruption occurred during, or as a result of, the testing process. If any issue did occur, a reverse synchronization from the SnapView clone gold copy would have made the data available immediately, thereby avoiding having to rebuild the database.

Backup copy

The backup clone copy was used for NetWorker proxy backups. The clone copy of the database was mounted to the proxy node and the backups were executed on the proxy node. This node is also referred to as the "clone mount host."

Configuration

It is a best practice to use ASM external redundancy for data protection when using EMC arrays. CLARiiON will also provide protection against loss of media, as well as transparent failover in the event of a specific disk or component failure. The following image shows the CLARiiON layout; the CX4-960 deployed for this solution had four 4 Gb Fibre Channel back-end buses for disk connectivity. The back-end buses are numbered Bus 0 to Bus 3. Each bus was connected to a number of DAEs (disk array enclosures). DAEs are numbered using the "Bus X Enc Y" nomenclature, so the first enclosure on Bus 0 is therefore known as Bus 0 Enc 0. Each bus had connectivity to both storage processors for failover purposes. Each enclosure can hold up to 15 disk drives. Each disk drive is numbered in an extension of the Bus Enclosure scheme. The first disk in Bus 0 Enclosure 0 is known as disk 0_0_0.
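The Bus_Enclosure_Slot naming scheme described above can be sketched as a small helper. This is a hypothetical illustration of the nomenclature, not an EMC tool:

```python
def disk_name(bus: int, enclosure: int, slot: int) -> str:
    """Return a CLARiiON disk identifier using the Bus_Enclosure_Slot scheme.

    The first enclosure on Bus 0 is Bus 0 Enc 0, so its first disk
    is disk 0_0_0. Ranges reflect the CX4-960 setup in this solution:
    four back-end buses, up to 15 drives per DAE.
    """
    if not 0 <= bus <= 3:
        raise ValueError("this configuration used back-end buses 0-3")
    if not 0 <= slot <= 14:
        raise ValueError("a DAE holds up to 15 disk drives (slots 0-14)")
    return f"{bus}_{enclosure}_{slot}"

# First disk in Bus 0 Enclosure 0, per the text:
print(disk_name(0, 0, 0))   # 0_0_0
```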


The following image shows how ASM diskgroups were positioned on the CLARiiON array.

The first enclosure contained the vault area. The first five drives, 0_0_0 through 0_0_4, have a portion of the drives reserved for internal use. This reserved area contained the storage processor boot images as well as the cache vault area. Disks 0_0_11 to 0_0_14 were configured as hot spares. Disks 0_0_5 to 0_0_9 were configured as RAID Group 0 with 16 LUNs used for the redo logs. These LUNs were then allocated as an ASM diskgroup, named the redo diskgroup. RAID Group 0 also contained the OCR disk and the Voting disk. The next four enclosures contained three additional ASM diskgroups. The following section explores this in more detail.


ASM diskgroups

The database was built using four distinct ASM diskgroups:

• The Data diskgroup contained all datafiles and the first control file.
• The Online Redo diskgroup contained online redo logs for the database and a second control file. Ordinarily, Oracle's best practice recommendation is for the redo log files to be placed in the same diskgroup as all the database files (the Data diskgroup in this example). However, it is necessary to separate the online redo logs from the Data diskgroup when planning to recover from split mirror snap copies, since the current redo log files cannot be used to recover the cloned database.
• The Flash Recovery diskgroup contained the archive logs.
• The Temp diskgroup contained tempfiles.

ASM data area

MetaLUNs were chosen for ease of management and future scalability. As the data grows, and consequently the number of ASM disks increases, ASM incurs an inherent overhead managing a large number of disks. Therefore, metaLUNs were selected to allow the CLARiiON to manage the request queues for a large number of LUNs. For the Data diskgroup, four striped metaLUNs were created, each containing four members. The members of each metaLUN were chosen so that each member resided on a different back-end bus, ensuring maximum throughput. The starting LUN for each metaLUN was also carefully selected so that the metaLUNs did not all start on the same RAID group. This avoided starting all the ASM disks on the same set of spindles, and alternating the metaLUN members balanced the LUN residence. This methodology ensured that ASM parallel chunk I/Os would not hit the same spindles at the same time within the metaLUNs when, or if, Oracle performed a parallel table scan.
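The member-selection rule described above, with each metaLUN member on a different back-end bus and a rotated starting point per metaLUN, can be sketched as follows. This is an illustrative model of the placement logic, not Navisphere code:

```python
def metalun_members(metalun_index: int, members: int = 4, buses: int = 4):
    """Pick the back-end bus for each member of one striped metaLUN.

    Each member lands on a different bus (maximum throughput), and the
    starting bus rotates per metaLUN so the metaLUNs do not all begin
    on the same set of spindles.
    """
    start = metalun_index % buses
    return [(start + m) % buses for m in range(members)]

# The four Data-diskgroup metaLUNs from the text:
layout = {i: metalun_members(i) for i in range(4)}
# metaLUN 0 -> buses [0, 1, 2, 3], metaLUN 1 -> [1, 2, 3, 0], and so on.
```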


EMC SnapView

SnapView clones were used to create complete copies of the database. One clone copy was used to offload the backup operations from the production nodes to the proxy node. A second clone copy was used as a gold copy. The following graphic shows an example of a clone LUN's relationship to its source LUN; in this example, the clone information is for one of the LUNs in the ASM Data diskgroup. SnapView clones create a full bit-for-bit copy of the respective source LUN. A clone was created for each of the LUNs contained within the ASM diskgroups, and all clones were then simultaneously split from their respective sources to provide a point-in-time, content-consistent replica set. The command naviseccli -h arrayIP snapview -listclonegroup -data1 was used to display information on this clone group. Each of the ASM diskgroup LUNs was added to a clone group, becoming the clone source device. Target LUN clones were then added to the clone group. Each clone group is assigned a unique ID and each clone gets a unique clone ID within the group. The first clone added has a clone ID of 0100000000000000, and the clone ID increments for each subsequent clone added. The clone ID is then used to specify which clone is selected each time a cloning operation is performed.

As shown above, there are two clones assigned to the clone group. Clone ID 0100000000000000 was used as the gold copy and clone ID 0200000000000000 was used for backups. (The Navisphere Manager GUI also shows this information.) When the clones are synchronized, they can be split (fractured) from the source LUN to provide an independent point-in-time copy of the database.
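The clone ID scheme can be reconstructed as a small sketch. This assumes 16-hex-digit IDs whose leading byte increments per clone, as the examples in the text suggest; it is illustrative, not SnapView code:

```python
def clone_id(n: int) -> str:
    """Clone ID for the n-th clone added to a clone group (1-based).

    The first clone added gets 0100000000000000 and the ID increments
    for each subsequent clone, padded to 16 hex digits (an assumed
    reconstruction of the numbering shown in the text).
    """
    if not 1 <= n <= 0xFF:
        raise ValueError("illustrative range only")
    return f"{n:02x}" + "0" * 14

print(clone_id(1))  # 0100000000000000 (gold copy in this solution)
print(clone_id(2))  # 0200000000000000 (backup copy)
```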


The LUNs used for the clone copies were configured in a similar fashion to the source copy to maintain the required throughput during the backup process. The image below shows the clone relationship for two of the metaLUNs.


Data Domain

Overview

The following sections describe how Data Domain systems ensure data integrity and provide multiple levels of data compression, reliable restorations, and multipath configurations. The Data Domain operating system (DD OS) Data Invulnerability Architecture™ protects against data loss from hardware and software failures.

Data integrity

When writing to disk, the DD OS creates and stores checksums and self-describing metadata for all data received. After writing the data to disk, the DD OS then recomputes and verifies the checksums and metadata. An append-only write policy guards against overwriting valid data. After a backup completes, a validation process verifies that all file segments are logically correct within the file system and that the data on disk is the same as what was written. In the background, the Online Verify operation continuously checks that data on the disks is correct and unchanged since the earlier validation process. The back-end storage is set up in a double-parity RAID 6 configuration (two parity drives). Additionally, hot spares are configured within the system. Each parity stripe has block checksums to ensure that data is correct. The checksums are constantly used during the Online Verify operation and when data is read from the Data Domain system. With double parity, the system can fix simultaneous errors on up to two disks. To keep data synchronized during a hardware or power failure, the Data Domain system uses NVRAM (non-volatile RAM) to track outstanding I/O operations. An NVRAM card with fully charged batteries (the typical state) can retain data for a minimum of 48 hours. When reading data back on a restore operation, the DD OS uses multiple layers of consistency checks to verify that restored data is correct.
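The verify-after-write and append-only behaviors described above can be illustrated with a toy in-memory store. This is a conceptual sketch using SHA-256, not the DD OS implementation:

```python
import hashlib

def write_with_verify(store: dict, name: str, data: bytes) -> str:
    """Append-only write followed by read-back checksum verification,
    loosely modeled on the DD OS behavior described in the text."""
    if name in store:
        # Append-only policy: never overwrite data already validated.
        raise ValueError("refusing to overwrite valid data")
    checksum = hashlib.sha256(data).hexdigest()
    store[name] = (data, checksum)
    # Read back and recompute the checksum, as the validation pass does.
    stored_data, stored_sum = store[name]
    if hashlib.sha256(stored_data).hexdigest() != stored_sum:
        raise IOError("verification failed: stored data differs from what was written")
    return checksum

store = {}
write_with_verify(store, "backup-001", b"oracle datafile segment")
```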

Data compression

DD OS stores only unique data. Through Global Compression, a Data Domain system pools redundant data from each backup image, so any duplicate data is stored only once. The storage of unique data is invisible to backup software, which sees the entire virtual file system. DD OS data compression is independent of data format; the data can be structured (for example, databases) or unstructured (for example, text files), and can come from file systems or raw volumes. Typical compression ratios average 20:1 over many weeks, assuming weekly full and daily incremental backups. A backup that includes many duplicate or similar files (files copied several times with minor changes) benefits the most from compression. Depending on backup volume, size, retention period, and rate of change, the amount of compression can vary. Data Domain performs inline deduplication only. Inline deduplication ensures:

• Smaller footprint
• Longer retention
• Faster restore
• Faster time to disaster recovery
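Deduplicated storage can be illustrated with a toy segment store: each unique segment is kept once, keyed by its fingerprint, and the logical-to-physical ratio is the compression ratio. A 20:1 figure emerges when the same data is backed up twenty times. This is illustrative only; real segment sizes, fingerprints, and local compression differ:

```python
import hashlib

def dedup_store(segments):
    """Store each unique segment once, keyed by fingerprint, and
    report the logical/physical compression ratio."""
    unique = {}
    logical = 0
    for seg in segments:
        logical += len(seg)
        unique.setdefault(hashlib.sha1(seg).hexdigest(), seg)
    physical = sum(len(s) for s in unique.values())
    return unique, logical / physical

# A weekly-full style workload: the same 3 segments backed up 20 times.
stream = [b"seg-A" * 100, b"seg-B" * 100, b"seg-C" * 100] * 20
unique, ratio = dedup_store(stream)
print(len(unique), ratio)  # 3 unique segments, 20.0x compression
```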


SISL

Stream-Informed Segment Layout (SISL) enables inline deduplication. SISL identifies 99 percent of duplicate segments in RAM and ensures that all related segments are stored in close proximity on disk for optimal reads.

Multipath and load-balancing configuration

Data Domain systems that have at least two 10 GbE ports support multipath configuration and load balancing. In a multipath configuration on a Data Domain system, each of two 10 GbE ports on the system is connected to a separate port on the backup server.


SAN topology

Oracle layout

The two-node Oracle 11g RAC cluster nodes and the proxy node were cabled and zoned as shown in the following image. Each node contained two dual-port HBAs. The four ports were used to connect the nodes to the CX4-960. CLARiiON best practice dictates that single-initiator soft zoning is used. Each HBA is zoned to both storage processors. This configuration offers the highest level of protection and may also offer higher performance. It entails the use of full-feature PowerPath software. In this configuration, there are multiple HBAs connected to the host, providing redundant paths to each storage processor, so there is no single point of failure. Data availability is ensured in the event of an HBA, cable, switch, or storage processor failure. Since there are multiple paths per storage processor, this configuration benefits from the PowerPath load-balancing feature and thus provides additional performance. The connectivity diagram below shows the two-node Oracle 11g RAC cluster nodes.


NetWorker topology

The EMC NetWorker environment provides the ability to protect your enterprise against the loss of valuable data. In a network environment, where the amount of data grows rapidly, the need to protect data becomes crucial. The EMC NetWorker product gives you the power and flexibility to meet such a challenge.

A Data Domain system integrates into a NetWorker environment as the storage destination for directed backups. In this solution, the Data Domain system was configured as a number of NFS shares. The NFS shares were configured as advanced file type devices (adv_file). This takes advantage of the speed of disk and easily integrates with a previously configured NetWorker environment.


10 GbE network topology

The 10 GbE backup network was enabled using a Brocade TurboIron 24 switch. The TurboIron is a compact, high-performance, high-availability, and high-density 10/1 GbE dual-speed solution. Variable length subnet masking was used to ensure that both paths to the Data Domain appliance were used to transport data during the backup phase.

Chapter 4: Oracle Database Design

Overview

Introduction to Oracle database design

This chapter provides guidelines on the Oracle database design used for this validated solution. The design and configuration instructions apply to the specific revision levels of components used during the development of the solution. Before attempting to implement any real-world solution based on this validated scenario, gather the appropriate configuration documentation for the revision levels of the hardware and software components. Version-specific release notes are especially important.

ASM diskgroups

The database was built with four distinct ASM diskgroups (+DATA, +FRA, +REDO, and +TEMP).

ASM Diskgroup    Contents
DATA             Data and index tablespaces, controlfile
FRA              Archive logs
REDO             Online redo log files, controlfile
TEMP             Temporary tablespace

The ASMCMD CLI lists the diskgroups, showing the state of each one.


Control files

The Oracle database in this solution has two control files, each stored in a different ASM diskgroup.


Redo logs

All database changes are written to the redo logs (unless logging is explicitly turned off), making them very write-intensive. To protect against a failure involving the redo logs, the database was created with multiplexed redo logs so that copies of each redo log are maintained on different disks. Archive log mode was enabled, which automatically creates offline archived copies of the online redo log files. Archive log mode enables online backups and media recovery.

Note: Oracle recommends that archive logging is enabled.


The previous graphic shows that once archive log mode was enabled, the archive logs were written out to the FRA diskgroup.

Parameter files

A centrally located server parameter file (spfile) persistently stored and managed the database initialization parameters used by all RAC instances. Oracle recommends that you create a server parameter file as a dynamic means of maintaining initialization parameters.


Swingbench and Datagenerator

Datagenerator is a utility used to populate, create, and load tables with semi-random data. This was used to generate the 1 TB schema. The following image shows the Swingbench Order Entry schema.


The Swingbench Configuration, User Details, and Load tabs (see the following image) enable you to change all of the important attributes that control the size and type of load placed on your server. Four of the most useful are:

• Number of Users: The number of sessions that Swingbench will create against the database.
• Min and Max Delay Between Transactions (ms): These values control how long Swingbench will put a session to sleep between transactions.
• Benchmark Run Time: The total time that Swingbench will run the benchmark. After this time has expired, Swingbench automatically logs off the sessions.

This graphic shows a typical example with 120 concurrent sessions.
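These knobs give a rough upper bound on the number of transactions a run can produce. The sketch below is illustrative arithmetic only; the avg_txn_ms figure is an assumption, not a Swingbench parameter:

```python
def estimated_transactions(sessions: int, min_delay_ms: int, max_delay_ms: int,
                           run_time_s: int, avg_txn_ms: float) -> int:
    """Rough upper bound on transactions for one Swingbench run.

    Each session alternates between a think-time sleep (averaging the
    min/max delay) and a transaction taking avg_txn_ms (assumed).
    """
    avg_delay_ms = (min_delay_ms + max_delay_ms) / 2
    per_session = (run_time_s * 1000) / (avg_delay_ms + avg_txn_ms)
    return int(sessions * per_session)

# 120 concurrent sessions, 50-200 ms think time, a 10-minute run,
# assuming roughly 25 ms per transaction:
print(estimated_transactions(120, 50, 200, 600, 25))  # 480000
```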

Chapter 5: Installation and Configuration

Overview

Introduction to installation and configuration

This chapter provides procedures and guidelines for installing and configuring the components that make up the validated solution scenario. The installation and configuration instructions presented in this chapter apply to the specific revision levels of components used during the development of this solution. Before attempting to implement any real-world solution based on this validated scenario, gather the appropriate installation and configuration documentation for the revision levels of the hardware and software components planned in the solution. Version-specific release notes are especially important.

Contents

This chapter contains the following topics:

• Navisphere
• PowerPath
• Install Oracle Clusterware
• Data Domain
• NetWorker
• Multiplexing


Navisphere

Overview

Navisphere Management Suite enables you to access and manage all CLARiiON advanced software functionality.

Register hosts

The Connectivity Status view in Navisphere, seen in the image below, shows the new host as logged in but not registered.

Install the Navisphere host agent on the host and reboot. The HBAs will then automatically register, as shown in the following image.


The Hosts tab shows the host as unknown and the host agent is unreachable; this is because the host is multi-homed, that is, the host has multiple NICs configured, as shown in the following image.

A multi-homed host machine has multiple IP addresses on two or more NICs. You can physically connect the host to multiple data links that can be on the same or different networks. When the Navisphere Host Agent is installed on a multi-homed host, the host agent, by default, binds to the first NIC in the host. To ensure that the host agent successfully registers with the desired CLARiiON storage system, you need to configure the host agent to bind to a specific NIC. To bind the agent to a specific NIC, you must create a file named agentID.txt. Stop the Navisphere agent, then rename or delete the HostIdFile.txt file located in /var/log, as shown in the following image.


Create agentID.txt in the root directory; this file should contain only the fully qualified hostname of the host and the IP address of the HBA/NIC port that the Navisphere agent should use. The agentID.txt file should contain only these two lines and no special characters, as shown in the following image.
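A small sketch of building the two-line agentID.txt body described above. The hostname and IP below are placeholders, not values from the tested environment:

```python
import ipaddress

def agent_id_contents(fqdn: str, nic_ip: str) -> str:
    """Build the agentID.txt body: fully qualified hostname on line 1,
    the IP of the NIC the agent should bind to on line 2, and nothing else."""
    ipaddress.ip_address(nic_ip)  # raises ValueError if not a valid address
    if "." not in fqdn:
        raise ValueError("a fully qualified hostname is expected")
    return f"{fqdn}\n{nic_ip}\n"

body = agent_id_contents("racnode1.example.com", "192.168.0.10")
# Write it with no extra characters, e.g.:
#   open("/agentID.txt", "w").write(body)
```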


Then stop and restart the Navisphere agent; this re-creates the HostIdFile.txt file, binding the agent to the correct NIC. The host now shows as registered correctly with Navisphere, as in the following image.


PowerPath

Overview

EMC PowerPath provides I/O multipath functionality. With PowerPath, a node can access the same SAN volume via multiple paths (HBA ports), which enables both load balancing across the multiple paths and transparent failover between the paths.

PowerPath policy

After PowerPath has been installed and licensed, it is important to set the PowerPath policy to “CLARiiON-Only”. The following image shows the powermt display output prior to setting the PowerPath policy.


The I/O Path Mode is shown to be unlicensed.


Once the PowerPath policy has been set correctly, all paths are alive and licensed. The previous image shows the powermt set policy command and the powermt display command output for CLARiiON LUN 80. It lists the eight paths for this device, all managed by PowerPath. Since SPA owns the LUN, the four paths to SPA are active, and the remaining paths to SPB are passive. All ASM diskgroups are then built using PowerPath pseudo names.

Note: A pseudo name is a platform-specific value assigned by PowerPath to the PowerPath device.


Because of the way in which the SAN devices were discovered on each node, there was a possibility that a pseudo device pointing to a specific LUN on one node might point to a different LUN on another node. The emcpadm command was used to ensure consistent naming of PowerPath devices on all nodes.
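The consistency problem emcpadm addresses can be sketched as a check that compares pseudo-name-to-LUN maps collected from each node. The data below is hypothetical; real maps would be parsed from powermt display output:

```python
def inconsistent_pseudos(node_maps: dict) -> set:
    """Given {node: {pseudo_name: lun}} maps from each RAC node, return
    pseudo devices that point at different LUNs on different nodes."""
    merged = {}
    for node, mapping in node_maps.items():
        for pseudo, lun in mapping.items():
            merged.setdefault(pseudo, set()).add(lun)
    return {p for p, luns in merged.items() if len(luns) > 1}

# Hypothetical example: emcpowerad drifted between discovery orders.
node_maps = {
    "rac1": {"emcpowerac": 10, "emcpowerad": 8},
    "rac2": {"emcpowerac": 10, "emcpowerad": 2},
}
print(inconsistent_pseudos(node_maps))  # {'emcpowerad'}
```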


The following image shows how to determine the available pseudo names.


The next image shows how to change the pseudo names using the following command: emcpadm renamepseudo -s <xxx> -t <yyy>


This table shows the PowerPath names associated with the LUNs used in the ASM diskgroups.

Diskgroup Purpose    Diskgroup Name    Path                CLARiiON LUN
Data files           DATA              /dev/emcpowerac     10
                                       /dev/emcpowerad     8
                                       /dev/emcpowerae     2
                                       /dev/emcpoweraf     0
Online Redo Logs     REDO              /dev/emcpowere      65
                                       /dev/emcpowerf      64
                                       /dev/emcpowerg      63
                                       /dev/emcpowerh      62
                                       /dev/emcpoweri      61
                                       /dev/emcpowerj      60
                                       /dev/emcpowerk      59
                                       /dev/emcpowerl      58
                                       /dev/emcpowerm      57
                                       /dev/emcpowern      56
                                       /dev/emcpowero      55
                                       /dev/emcpowerp      54
                                       /dev/emcpowerq      53
                                       /dev/emcpowerr      52
                                       /dev/emcpowers      50
                                       /dev/emcpowert      51
Temp/Undo            TEMP              /dev/emcpoweru      22
                                       /dev/emcpowerv      20
                                       /dev/emcpowerw      16
                                       /dev/emcpowerx      18
Flash Recovery       FRA               /dev/emcpowery      23
                                       /dev/emcpowerz      21
                                       /dev/emcpoweraa     19
                                       /dev/emcpowerab     17

High availability health check

To verify that the hosts and the CLARiiON are set up for high availability, install and run the naviserverutilcli utility on each node to ensure that everything is set up correctly for failover. To run the utility, use the following command: naviserverutilcli hav -upload -ip 172.<xxxxxxx>

In addition to the standard output, the health check utility also uploads a report to the CLARiiON storage processors that can be retrieved and stored for reference.


Install Oracle Clusterware

Overview Oracle 11g Clusterware was installed and configured for both production nodes. Below are a number of screenshots taken during the installation, showing the configuration of both RAC nodes.

Cluster installation summary

The image below shows the installation summary screen.

Configure ASM and Oracle 11g software and database

Before configuring Oracle and ASM, EMC recommends reviewing the Oracle Database Installation Guide 11g Release 1 (11.1) for Linux. The following general guidelines apply when configuring ASM with EMC technology:

• Use multiple diskgroups, preferably a minimum of four, optimally five. Place the Data, Redo, Temp, and FRA in different (separate) diskgroups.
• Use external redundancy instead of ASM mirroring.
• Configure diskgroups so that each contains LUNs of the same size and performance characteristics.
• Distribute ASM diskgroup members over as many spindles as is practical for the site's configuration and operational needs.


Partition the disks

In order to use either file systems or ASM, you must have unused disk partitions available. This section describes how to create the partitions that will be used for new file systems and for ASM. When partitioning the disks, it is important to align the partition correctly. Partitions on Intel-based systems are misaligned by default because of the metadata written by the BIOS. To align the partition correctly and ensure improved performance, use an offset of 64 KB (128 blocks). This example uses /dev/emcpowera (an empty disk with no existing partitions) to create a single partition for the entire disk.
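The 64 KB (128-block) figure follows directly from 512-byte sectors; a quick check of the arithmetic and of whether a given starting sector is aligned:

```python
SECTOR_BYTES = 512
ALIGN_BYTES = 64 * 1024           # the 64 KB offset recommended above

offset_sectors = ALIGN_BYTES // SECTOR_BYTES
print(offset_sectors)             # 128 blocks, as stated in the text

def is_aligned(start_sector: int) -> bool:
    """True if a partition starting at this sector sits on a 64 KB boundary."""
    return (start_sector * SECTOR_BYTES) % ALIGN_BYTES == 0

# Sector 128 is aligned; the legacy DOS default of sector 63 is not.
print(is_aligned(128), is_aligned(63))
```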


ASM diskgroup creation

The Oracle DBCA creates the ASM data diskgroup for the ASM instance. You can then create additional diskgroups.

ASM uses mirroring for redundancy. ASM supports these three types of redundancy:

• External redundancy.
• Normal redundancy: 2-way mirrored. At least two failure groups are needed.
• High redundancy: 3-way mirrored. At least three failure groups are needed.

EMC recommends using external redundancy, as protection is provided by the CLARiiON CX4-960. Refer to the CLARiiON configuration setup.


Database installation

Once the ASM diskgroups were created, Oracle Database 11g 11.1.0.6.0 was installed.


The Oracle environment was patched to 11.1.0.7.0.


Data Domain

Introduction

The Data Domain DD880 integrates easily into existing data centers and can be configured for leading backup and archiving applications using NFS, CIFS, OST, or VTL protocols. This solution is deployed using NFS. The Data Domain appliance was configured with two 10 GbE optical cards for connection to the backup network.

Data Domain Enterprise Manager

The following image shows the Data Domain Enterprise Manager.

When integrating a Data Domain appliance into an environment that also has NetWorker and RMAN deployed, it is best practice to create multiple shares on the appliance. You can then access these shares as either NFS or CIFS shares on the NetWorker storage nodes.

Create multiple shares

Creating the shares involves mounting the appliance “/backup” directory to a server and creating the required directories. The number of directories required is determined by the total number of NetWorker storage nodes that will access the restorer and the total number of streams required by each server. Each NetWorker stream requires an individual device.
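The sizing rule above, where every NetWorker stream needs its own advanced-file device (directory), is simple arithmetic. A sketch, with the example figures being hypothetical:

```python
def devices_required(storage_nodes: int, streams_per_node: int) -> int:
    """Directories (devices) to create on the Data Domain appliance:
    one per stream, for every storage node that accesses the restorer."""
    return storage_nodes * streams_per_node

# e.g. 2 storage nodes, each running 4 concurrent streams,
# would need 8 NFS directories under /backup:
print(devices_required(2, 4))  # 8
```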


1. To mount /backup, create a suitable mount point on the Linux box, for example:

   mkdir /ddr/masbackup

   Then enter the following command:

   mount -t nfs -o hard,intr,nfsvers=3,proto=tcp,bg 192.168.0.1:/backup /ddr/masbackup


2. When /backup is mounted, create new subdirectories using commands such as:

   mkdir enc06
   mkdir pm1

3. In this use case, the NetWorker server is a Windows 2003 server. Therefore, a CIFS share is required for this Windows host. To set up shares in the GUI, select: Maintenance > Tasks > Launch Configuration Wizard > CIFS


4. Select the authentication method. In this example, Workgroup authentication was used.

5. In the “Enter workgroup name” field, enter the workgroup name, and the WINS Server name in the “WINS Server” field, if applicable.


6. Add the appropriate backup user name and password.


7. Enter the Backup Server list. In this example, * was used. An asterisk (*) gives access to all clients on the network.

Create a CIFS share

The CLI can then be used to create a CIFS share for use by the NetWorker server.

1. Enter the following command to create a CIFS share:

cifs share create share enc06 path /backup/enc06 clients 192.168.0.2 writeable enabled users backup


2. Check that the share is available on Windows. Select Start – Run, and enter: \\path\dir

Note: The devices should not be mounted on Windows; this is only used to verify the UNC path.

3. Because the CIFS device is on a remote server, it is important that NetWorker has the correct permissions to access the remote device. To achieve this, the NetWorker service must log on as a specific account instead of the default local system account. This account must be the same as that specified earlier as the backup user.

4. Ensure that the permissions on the share are correct. As the share was created on Linux, root is the owner, therefore permission must be granted to other users and groups.


5. It is then possible to create a new device and label it. Users should not edit device files and directories. This action is not supported, and such editing can cause unpredictable behavior, making it impossible to recover data.


6. Below is a typical device after labeling.

Set up NFS shares

The Oracle RAC nodes and NetWorker proxy server are all Linux-based; therefore NFS shares are also required.

1. Use the Data Domain GUI to set up shares.


2. Select GUI > Maintenance > Tasks > Launch Configuration Wizard > NFS.

3. In the Backup Server List field, add all servers to the list. NFS shares are then added to the appliance; you specify the client and the path.

4. Enter the following command to add new clients:

nfs add /backup/pm1 192.168.0.3


5. Display the client list by entering:

nfs show clients

6. Mount the devices on the Linux host, for example:

mount -t nfs -o hard,intr,nfsvers=3,proto=tcp,bg 192.168.0.1:/backup /ddr/masbackup
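To make such a mount persist across reboots, an /etc/fstab entry carrying the same options can be used. This is a sketch built from the document's example IP and mount point; verify the options against your environment:

```
192.168.0.1:/backup  /ddr/masbackup  nfs  hard,intr,nfsvers=3,proto=tcp,bg  0 0
```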

The new devices can then be added to NetWorker.


NetWorker

NetWorker introduction

The following NetWorker components were installed:

• NetWorker Server
• NetWorker Management Console
• RAC nodes: NetWorker storage node, NetWorker Client, NMO
• Proxy node: NetWorker storage node, NetWorker Client, NMO

NetWorker configuration

Once the NFS shares are mounted to the appropriate servers, NetWorker can then mount and label the shares as adv_file type devices.

Because the NFS share is a remote device, the name format used is similar to the example below. The full path name is preceded by rd=:

rd=tce-r900-enc03.emcweb.ie/dd/backup3


NetWorker will then verify the path to the devices; once verified, the device is available to be labeled. When NetWorker labels an advanced file type device, it automatically creates a secondary device with read-only accessibility. The secondary volume is given a “_readonly” suffix in its name, and NetWorker then automounts this device. This enables concurrent operations, such as reading from the read-only device.

The NetWorker wizard was used to configure the client backups on each node.

The NetWorker Module for Oracle (NMO) was installed on each node to enable tight NetWorker integration with Oracle RMAN.


Once the client was added successfully, it was then modified to set the following parameters:

• Number of channels
• Backup level
• Control file backup
• Archive redo log backup
• Filesperset

Testing was conducted using different numbers of RMAN channels, and the Data Domain appliance was configured with two 10 GbE optical NICs for connection to the NetWorker storage nodes. The filesperset parameter was tested at default and at one. EMC recommends setting this parameter to one, which ensures that multiplexing is not introduced, as multiplexing has a negative effect on the deduplication rates achieved. This is explained in greater detail in the next section.


The wizard creates the RMAN script, as shown below, which can be modified if required. Refer to “Chapter 6: Testing and Validation” and “Appendix A: Scripts” for more details.


Multiplexing

RMAN multiplexing

When using a deduplication appliance, such as a DD880, you should disable multiplexing. When creating backup sets, RMAN can simultaneously read multiple files from disk and then write their blocks into the same backup set. For example, RMAN can read from two datafiles simultaneously, and then combine the blocks from these datafiles into a single backup piece. This combination of blocks from multiple files is called RMAN multiplexing. Like NetWorker multiplexing, RMAN multiplexing has a negative effect on deduplication.

The parameter that controls multiplexing within Oracle is filesperset. The filesperset parameter specifies the number of files that will be packaged together and sent on a single channel to a tape device. This has the same effect as mixing bits from many files, and again makes it more difficult to detect segments of data that already exist. Therefore, to take full advantage of data deduplication, it is important to have the filesperset parameter set to one.
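The effect described above can be shown with a toy model. The fixed 4-byte segments and tiny "datafiles" below are simplifying assumptions for illustration only; Data Domain actually uses variable-length segments, as noted later in this guide.

```shell
# Toy model of segment deduplication (fixed 4-byte segments; the real
# appliance uses variable-length segments): count unique segments stored.
seg_count() { fold -w4 | sort -u | wc -l; }

printf 'AAAABBBBCCCCDDDD' > /tmp/f1   # two tiny "datafiles"
printf 'EEEEFFFFGGGGHHHH' > /tmp/f2

# filesperset=1: runs 1 and 2 each send the files contiguously, so run 2
# repeats run 1's segment sequence exactly -- nothing new is stored.
cat /tmp/f1 /tmp/f2 /tmp/f1 /tmp/f2 | seg_count   # 8 unique segments

# Multiplexed run 2 instead: 2-byte chunks of f1 and f2 interleaved. Every
# resulting segment mixes bytes from both files and matches nothing in run 1.
fold -w2 /tmp/f1 > /tmp/f1.chunks
fold -w2 /tmp/f2 > /tmp/f2.chunks
MUX=$(paste -d '\0' /tmp/f1.chunks /tmp/f2.chunks | tr -d '\n')
{ cat /tmp/f1 /tmp/f2; printf '%s' "$MUX"; } | seg_count   # 12 unique segments
```

With multiplexing, the second run contributes four entirely new segments even though no new data was backed up, which is exactly why deduplication rates drop.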


Chapter 6: Testing and Validation

Overview

Introduction to testing and validation

Storage design is an important element in ensuring the successful development of the EMC Backup and Recovery for Oracle 11g OLTP solution.

Contents

This section contains the following topic:

Section A: Test results summary and resulting recommendations


Section A: Test results summary and resulting recommendations

Description of the results summary and conclusions

Backups were run using a backup schedule consisting of level 0 (full) backups only. For the purposes of this solution, Friday COB was deemed to be the start of the weekend. Archived redo logs and the control file were also backed up as part of each backup.

Backing up the archived redo logs had a significant impact on the overall change rate of the database. The change rate of the database was 2 percent. However, because the archived log files were backed up on every backup, the change rate observed during incremental backups was actually much higher, closer to 10 percent.

For this use case, EMC carried out a number of tests on the Oracle 11g OLTP backup and recovery infrastructure. At a high level:

• Orion validation
• Swingbench
  • Validate Swingbench profile
  • Backup from production
  • SnapView clone copy from production
• Data Domain deduplication
• Restore


Orion validation

Once the disk environment was set up on the CLARiiON CX4-960, the disk configuration was validated using an Oracle toolset called Orion. Orion is the Oracle I/O Numbers Calibration Tool, designed to simulate Oracle I/O workloads without having to create and run an Oracle database. It utilizes the Oracle database I/O libraries and can simulate OLTP workloads (small I/Os) or data warehouse workloads (large I/Os). Orion is useful for understanding the performance capabilities of a storage system, either to uncover performance issues or to size a new database installation.

Note: Orion is a destructive tool, so it should only be run against raw devices prior to installing any database or application.

This graph shows total throughput on a single node, with four metaLUNs consisting of 40 disks.

This demonstrates the desired scaling.


Validate backup source RAC node or proxy node

The following image shows the processor utilization on a RAC node under the following conditions:

• Swingbench load only
• Swingbench load plus a backup running on the node
• Swingbench load plus a clone sync

This graph shows that using SnapView clones to create a copy of the production database significantly alleviates much of the backup overhead. The clone copy is mounted to the proxy host and the backup is then run from the proxy host.

Line 1: shows the total CPU utilization on a RAC node under Swingbench load, simulating a production-like load on the database.
Line 2: shows the overhead incurred when running a backup on the RAC node concurrently with the Swingbench load.
Line 3: shows the overhead incurred when creating a clone copy of the production database while running the Swingbench load.
Pointer 4: is the point at which the clone sync commenced. This was an incremental sync.


MetaLUN response times

The following graphs further illustrate the advantage of offloading the backup from the production node to the proxy node. They illustrate the CLARiiON metaLUN response times. The first graph below shows the response time from the metaLUNs under a Swingbench load. These metaLUNs constitute the ASM DATA+ disk group.

The following graph shows the same metaLUNs' response time. As in the previous example, the Swingbench load is running against the cluster. In addition, an RMAN backup initiated by NetWorker is also running against Oracle RAC Node 1. The backup is running against the same LUNs that are serving the Swingbench load, and the response time is higher for the duration of the backup.


The graph below shows the response time from the CLARiiON CX4-960 for the duration of the backup process. Here, the Swingbench load is running against the RAC node cluster, and synchronization of the clone copy is initiated. On completion of the sync, the database is put into hot backup mode, the clones are fractured, the database is taken out of hot backup mode, and the clones are mounted to the proxy host. NetWorker then initiates the backup from the proxy host. The response time remains steady except for two short periods, explained below.

Pointer 1: The first increase occurs when the clone copy sync is initiated.

Pointer 2: The second increase in response time occurs when the database is put into hot backup mode. The spike occurs because:

• Any dirty data buffers in the database buffer cache are written out to the files and the datafiles are checkpointed.
• The datafile headers are updated to the system change number (SCN) captured when the begin backup command is issued. The SCN is not incremented with checkpoints while a file is in hot backup mode. This lets the recovery process understand which archived redo log files may be required to fully recover this file from that SCN onward.
• The datablocks within the database files continue to be read and written to.
• During hot backup, an entire block is written to the redo log files the first time the data block is changed. Subsequently, only redo vectors (changed bytes) are written to the redo logs.

Pointer 3: When the database is taken out of hot backup mode, the datafile header and SCN are updated.

Pointer 4: The clone copy is then mounted to the proxy node and the RMAN backup is launched from NetWorker using the proxy host, which is a dedicated storage node.


The backup begins at data point 4. Because the clone metaLUNs are made up of a separate and independent group of disks, there is no additional overhead on the production LUNs.

Pointer 5: Backup completes.

Deduplication

The following images show the data stored on the DD880 after five weeks running the backup schedule. The backup schedule consisted of RMAN level 0 (full) backups only; that is, level 0 backups on the weekend and level 0 backups Monday through Thursday. The database daily change rate is ≈ 2 percent. However, because the archived log files were also backed up, the change rate observed during incremental backups was actually much higher, closer to 10 percent.

By eliminating redundant data segments, the Data Domain system allows many more backups to be stored and managed than would normally be possible for a traditional storage server. While completely new data has to be written to disk whenever discovered, the variable-length segment deduplication capability of the Data Domain system makes finding identical segments extremely efficient.

The storage saving graph above shows the data written to the DD880 over a five-week period. The backup cycle consisted of all RMAN level 0 (full) backups.

Line 1: "Data Written" shows that approximately 24 TB was backed up over the five-week period.
Line 2: "Data Stored" tracks the unique data actually stored on the DD880 after inline deduplication. The remaining redundant data was eliminated. This results in a net saving of 92 percent of the storage space required over the five-week period.
Line 3: "% Reduction" shows the storage saving as a percentage over the five-week period. This corresponds to a deduplication factor of 13:1.
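The three lines are mutually consistent: a 13:1 factor means roughly 1/13 of the 24 TB written is actually stored, which is a saving of about 92 percent. A quick awk cross-check of the reported figures:

```shell
# Cross-check the reported figures: dedup factor vs. percent reduction.
awk 'BEGIN {
  written = 24               # TB written over five weeks (Line 1)
  factor  = 13               # deduplication factor (Line 3)
  printf "stored=%.1f TB saving=%.0f%%\n", written/factor, (1 - 1/factor)*100
}'
# prints: stored=1.8 TB saving=92%
```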


Line 1: shows the deduplication factor of 13:1.

A full-only backup schedule, made possible when using a deduplication appliance, eliminates the restore penalty associated with an incremental backup schedule because the entire image is always available on the device for any given restore point. However, a backup schedule consisting of only level 0 backups is not always possible or practical.


The charts below show the same database backup cycle, but in this case the backup schedule employed a mix of both level 0 (full) and level 1 (incremental) backups. Refer to the EMC Backup and Recovery for Oracle 11g OLTP Enabled by EMC CLARiiON, EMC Data Domain, EMC NetWorker, and Oracle Recovery Manager using Fibre Channel Proven Solution Guide for more details.

Line 1: “Data Written” is much lower in this instance, as the backup schedule employed incremental backups during the week; therefore much less data was sent to the appliance, approximately 10 TB.
Line 2: “Data Stored” remains the same, however, as the Data Domain appliance identifies and saves only the unique data sent to it. The overall reduction ratio is lower because less redundant data is sent to the appliance.
Line 3: "% Reduction" shows the storage saving as a percentage over the five-week period.


The graph below shows that during the weekdays when level 1 (incremental) backups are sent to the appliance, the deduplication rate decreases.

Line 1: shows the deduplication factor of 6:1.

Note: The graphs show the total “Data Written” to the DD880 increasing over time; this is also described as the logical capacity. The “Data Stored” refers to the unique data that is stored on the appliance. The “% Reduction” shows the storage savings gained from using Data Domain.

Filesperset parameter

When using a deduplication appliance, such as a DD880, it is best practice to ensure that multiplexing is disabled. The parameter that controls multiplexing within Oracle is filesperset. To take full advantage of data deduplication, it is important to set this parameter to one. The graphs below show the effect of setting filesperset (FPS) to the default.


Line 1: Shows the deduplication rate when the filesperset parameter is set to one. Three weeks into the backup cycle, the deduplication factor is over 11:1.
Line 2: Shows the deduplication rate over the same time period when the filesperset parameter is set to the default. The deduplication factor achieved now only reaches 8:1.

Therefore, when you set the filesperset parameter to the default, the percentage storage saving is lower than that achieved when it is set to one. The following graph shows the effect on the percentage storage saving when the filesperset parameter is set to one versus setting it to the default.
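The deduplication factor and the percentage saving are two views of the same number: saving = (1 − 1/factor) × 100. Applying this to the two ratios above reproduces the approximate savings reported in the storage-saving graph:

```shell
# Convert deduplication factors to percent storage savings.
awk 'BEGIN {
  printf "fps=1 (11:1): %.1f%%\n", (1 - 1/11) * 100        # 90.9%
  printf "fps=default (8:1): %.1f%%\n", (1 - 1/8) * 100    # 87.5%
}'
```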


Line 1: When the filesperset parameter is set to one, there is a saving of over 91 percent of storage requirements.
Line 2: When the filesperset parameter is set to the default, the saving is 87 percent.

This clearly demonstrates the effect of the filesperset parameter on deduplication rates. Setting the parameter to one achieves a four percentage point improvement in storage capacity savings.

Restore

Data Domain’s Stream-Informed Segment Layout (SISL) technology ensures balanced backup and restore speeds. The backup schedule utilizes only full backups every day. This is possible because only unique data is stored on the DD880 appliance. This schedule has the advantage that when a recovery is required for any point in time, only a single restore is needed, as incremental or differential restores are not required. This greatly improves the RTO.

When you implement an incremental backup schedule, as shown, a worst-case restore (for example, of Thursday’s backup) requires a multi-stage restore operation.


You must first restore the full weekend backup; after this restore is successful, you must restore each weekday’s incremental backup one after the other, and only then is Thursday’s data restored. When using a Data Domain deduplication appliance, it is possible to implement a full-only backup schedule; therefore only a single restore is required, regardless of where in the schedule the data restore is required.


Chapter 7: Conclusion

Overview

Introduction to conclusion

This Proven Solution Guide details an Oracle infrastructure design leveraging an EMC CLARiiON CX4-960 array, EMC Data Domain DD880, and EMC NetWorker. Also included are various test results, configuration practices, and recommended Oracle storage design layouts that meet both capacity and consolidation requirements. This document describes many technologies that enable the benefits outlined below.

Conclusion

Traditional hardware compression provides substantial cost savings in Oracle environments. However, in this solution data deduplication significantly reduces the amount of data that needs to be stored over an extended period of time. This solution offers cost savings both from a management standpoint and in the number of disks or tapes required by a customer to achieve their long-term backup strategy.

Data deduplication can fundamentally change the way organizations protect backup and nearline data. Deduplication changes the repetitive backup practice of tape: only unique, new data is written to disk, so the deduplicated backup image does not carry the restore penalty associated with incremental backups, because the entire image is always available on the device. This eliminates the need for incremental restores.

The test results show that, in an environment utilizing RMAN full backups, a data deduplication ratio of over 13:1, resulting in a 92 percent saving in the storage required to accommodate the backup data, makes it economically practical to retain the savesets for longer periods of time. This reduces the likelihood that a data element must be retrieved from the vault and can significantly improve the RTO.

Although cost savings are generally not the initial reason to consider moving to disk backup and deduplication, financial justification is almost always a prerequisite. With the potential cost savings of disk and deduplication, the justification statement becomes, “we can achieve all of these business benefits and save money.” That is a compelling argument.

The solution meets the business challenges in the following manner:

• Ability to keep applications up 24x7
  • Faster backup and restores – meet more aggressive backup windows, and restore your key applications in minutes, not days
  • Reduced backup windows – minimize backup windows to reduce impact on your application and system availability
• Protect the business information as an asset of the business
  • Reduced business risk – restore data quickly and accurately with built-in hardware redundancy and RAID protection
  • Reduced backup windows – minimize backup windows to reduce impact on your application and system availability


• Efficient use of both infrastructure and people to support the business
  • Improved IT efficiency – save hours of staff time and boost user productivity
  • Correct costs / reduce costs – match infrastructure costs with changing information value via efficient, cost-effective tiered storage

In summary, utilizing the solution components, in particular CLARiiON technology, EMC Data Domain, and EMC NetWorker software, provides customers with the best possible backup solution to prevent both user and business impact. Business can continue as usual, as if there is no backup taking place. In customer environments where, more than ever, there is a trend toward 24x7 activity, this is a critical differentiator that EMC can offer.

Next steps

EMC can help to accelerate assessment, design, implementation, and management while lowering the implementation risks and costs of a backup and recovery solution for an Oracle Database 11g environment. To learn more about this and other solutions, contact an EMC representative or visit http://www.emc.com/solutions/application-environment/oracle/index.htm.


Appendix A: Scripts


Clone copy process

The following is an overview of the steps taken to create a clone copy of the database. The clone copy is then mounted to the proxy host prior to backup. The naviseccli commands below were used to sync the proxy clone. It was necessary to perform the clone fracture in two stages to facilitate a log switch after the database was taken out of hot backup mode.

naviseccli -h 172.30.226.20 snapview -syncclone -name data1 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name data2 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name data3 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name data4 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name fra1 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name fra2 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name fra3 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name fra4 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name temp1 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name temp2 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name temp3 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name temp4 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name ocr -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name voting -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo1 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo2 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo3 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo4 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo5 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo6 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo7 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo8 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo9 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo10 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo11 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo12 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo13 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo14 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo15 -cloneid 0200000000000000
naviseccli -h 172.30.226.20 snapview -syncclone -name redo16 -cloneid 0200000000000000

The naviseccli commands below were used to fracture the proxy clone in two stages.

naviseccli -h 172.30.226.20 snapview -consistentfractureclones -CloneGroupNameCloneId data1 0200000000000000 data2 0200000000000000 data3 0200000000000000 data4 0200000000000000 temp1 0200000000000000 temp2 0200000000000000 temp3 0200000000000000 temp4 0200000000000000 ocr 0200000000000000 voting 0200000000000000 redo1 0200000000000000 redo2 0200000000000000 redo3 0200000000000000 redo4 0200000000000000 redo5 0200000000000000 redo6 0200000000000000 redo7 0200000000000000 redo8 0200000000000000 redo9 0200000000000000 redo10 0200000000000000 redo11 0200000000000000 redo12 0200000000000000 redo13 0200000000000000 redo14 0200000000000000 redo15 0200000000000000 redo16 0200000000000000

naviseccli -h 172.30.226.20 snapview -consistentfractureclones -CloneGroupNameCloneId fra1 0200000000000000 fra2 0200000000000000 fra3 0200000000000000 fra4 0200000000000000 -o
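Since the 30 sync commands differ only in the clone group name, a small wrapper can generate them. The loop below is a hypothetical convenience (echoed as a dry run, so it needs no naviseccli installed), not part of the documented procedure; the SP address and clone ID are the document's example values.

```shell
# Dry run: emit the naviseccli sync command for every clone group.
SP=172.30.226.20
CLONEID=0200000000000000
CLONE_GROUPS="data1 data2 data3 data4 fra1 fra2 fra3 fra4 temp1 temp2 temp3 temp4 ocr voting"
for i in $(seq 1 16); do
  CLONE_GROUPS="$CLONE_GROUPS redo$i"
done
N=0
for grp in $CLONE_GROUPS; do
  echo "naviseccli -h $SP snapview -syncclone -name $grp -cloneid $CLONEID"
  N=$((N + 1))
done
echo "$N commands"    # 30 commands
```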

NetWorker RMAN backup script

The RMAN script below is a typical example of one used to generate backups through the NetWorker console. This example shows an eight-channel incremental level 0 backup to tape. Each backup was assigned a tag ID, which was later used as part of the restore process.

RUN {
ALLOCATE CHANNEL CH1 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH2 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH3 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH4 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH5 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH6 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH7 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH8 TYPE 'SBT_TAPE';
BACKUP INCREMENTAL LEVEL 0 FILESPERSET 1 FORMAT '%d_%U' TAG 'RUN529' DATABASE PLUS ARCHIVELOG;
BACKUP CONTROLFILECOPY '+FRA/ORCL/control_backup' TAG 'RUN529_CTL';
RELEASE CHANNEL CH1;
RELEASE CHANNEL CH2;
RELEASE CHANNEL CH3;
RELEASE CHANNEL CH4;
RELEASE CHANNEL CH5;
RELEASE CHANNEL CH6;
RELEASE CHANNEL CH7;
RELEASE CHANNEL CH8;
}


Oracle RMAN restore script

The restore process consisted of first allocating eight channels, then restoring the controlfile, mounting the database, and performing the restore database command using the tag ID assigned earlier. Below is a sample restore script.

RUN {
ALLOCATE CHANNEL CH1 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH2 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH3 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH4 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH5 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH6 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH7 TYPE 'SBT_TAPE';
ALLOCATE CHANNEL CH8 TYPE 'SBT_TAPE';
RESTORE CONTROLFILE FROM TAG 'RUN529_CTL';
ALTER DATABASE MOUNT;
RESTORE DATABASE FROM TAG 'RUN529';
RELEASE CHANNEL CH1;
RELEASE CHANNEL CH2;
RELEASE CHANNEL CH3;
RELEASE CHANNEL CH4;
RELEASE CHANNEL CH5;
RELEASE CHANNEL CH6;
RELEASE CHANNEL CH7;
RELEASE CHANNEL CH8;
}